Merge branch 'master' into cli_db_commands

This commit is contained in:
bergquist 2016-12-09 12:36:05 +01:00
commit cd85e1f651
3934 changed files with 228149 additions and 815897 deletions


@ -2,7 +2,7 @@
root = true
[*.go]
indent_style = tabs
indent_style = tab
indent_size = 2
charset = utf-8
trim_trailing_whitespace = true


@ -12,11 +12,11 @@ grunt karma:dev
### Run tests for backend assets before commit
```
test -z "$(gofmt -s -l . | grep -v Godeps/_workspace/src/ | tee /dev/stderr)"
test -z "$(gofmt -s -l . | grep -v -E 'vendor/(github.com|golang.org|gopkg.in)' | tee /dev/stderr)"
```
### Run tests for frontend assets before commit
```
grunt test
godep go test -v ./pkg/...
npm test
go test -v ./pkg/...
```


@ -1,7 +1,5 @@
* **I'm submitting a ...**
- [ ] Bug report
- [ ] Feature request
- [ ] Question / Support request: **Please do not** open a github issue. [Support Options](http://grafana.org/support/)
Please prefix your title with [Bug] or [Feature request]
For questions, please check [Support Options](http://grafana.org/support/). **Do not** open a GitHub issue
Please include this information:
- What Grafana version are you using?
@ -11,7 +9,10 @@ Please include this information:
- What was the expected result?
- What happened instead?
**IMPORTANT** If it relates to metric data viz:
**IMPORTANT**
If it relates to *metric data viz*:
- An image or text representation of your metric query
- The raw query and response for the network request (check this in the Chrome dev tools network tab, where you can see metric requests and other requests; please include the request body and the response)
If it relates to *alerting*:
- An image of the test execution data fully expanded.

3
.gitignore vendored

@ -25,6 +25,7 @@ public/css/*.min.css
*.swp
.idea/
*.iml
.vscode/
/data/*
/bin/*
@ -37,4 +38,4 @@ profile.cov
.notouch
/pkg/cmd/grafana-cli/grafana-cli
/pkg/cmd/grafana-server/grafana-server
/examples/*/dist
/examples/*/dist


@ -1,6 +1,6 @@
#!/usr/bin/env bash
test -z "$(gofmt -s -l . | grep -v Godeps/_workspace/src/ | tee /dev/stderr)"
test -z "$(gofmt -s -l . | grep -v vendor/src/ | tee /dev/stderr)"
if [ $? -gt 0 ]; then
echo "Some files aren't formatted, please run 'go fmt ./pkg/...' to format your source code before committing"
exit 1


@ -1,20 +1,115 @@
# 4.0-pre (unreleased)
# 4.1-beta (unreleased)
### Enhancements
* **Postgres**: Add support for Certs for Postgres database [#6655](https://github.com/grafana/grafana/issues/6655)
* **Victorops**: Add VictorOps notification integration [#6411](https://github.com/grafana/grafana/issues/6411), thx [@ichekrygin](https://github.com/ichekrygin)
* **Opsgenie**: Add OpsGenie notification integration [#6687](https://github.com/grafana/grafana/issues/6687), thx [@kylemcc](https://github.com/kylemcc)
* **Singlestat**: New aggregation on singlestat panel [#6740](https://github.com/grafana/grafana/pull/6740), thx [@dirk-leroux](https://github.com/dirk-leroux)
* **Cloudwatch**: Make it possible to specify access and secret key on the data source config page [#6697](https://github.com/grafana/grafana/issues/6697)
* **Table**: Added Hidden Column Style for Table Panel [#5677](https://github.com/grafana/grafana/pull/5677), thx [@bmundt](https://github.com/bmundt)
* **Graph**: Shared crosshair option renamed to shared tooltip, shows tooltip on all graphs as you hover over one graph. [#1578](https://github.com/grafana/grafana/pull/1578), [#6274](https://github.com/grafana/grafana/pull/6274)
* **Elasticsearch**: Added support for Missing option (bucket) for terms aggregation [#4244](https://github.com/grafana/grafana/pull/4244), thx [@shanielh](https://github.com/shanielh)
* **Elasticsearch**: Added support for Elasticsearch 5.x [#6356](https://github.com/grafana/grafana/pull/6356), thx [@lpic10](https://github.com/lpic10)
### Bugfixes
* **API**: HTTP API for deleting org returning incorrect message for a non-existing org [#6679](https://github.com/grafana/grafana/issues/6679)
* **Dashboard**: Posting an empty dashboard resulted in a corrupted dashboard [#5443](https://github.com/grafana/grafana/issues/5443)
# 4.0.2 (2016-12-08)
### Enhancements
* **Playlist**: Add support for kiosk mode [#6727](https://github.com/grafana/grafana/issues/6727)
### Bugfixes
* **Alerting**: Add alert message to webhook notifications [#6807](https://github.com/grafana/grafana/issues/6807)
* **Alerting**: Fixes a bug where avg() reducer treated null as zero. [#6879](https://github.com/grafana/grafana/issues/6879)
* **PNG Rendering**: Fix for server side rendering when using non default http addr bind and domain setting [#6813](https://github.com/grafana/grafana/issues/6813)
* **PNG Rendering**: Fix for server side rendering when setting enforce_domain to true [#6769](https://github.com/grafana/grafana/issues/6769)
* **Webhooks**: Add content type json to outgoing webhooks [#6822](https://github.com/grafana/grafana/issues/6822)
* **Keyboard shortcut**: Fixed zoom out shortcut [#6837](https://github.com/grafana/grafana/issues/6837)
* **Webdav**: Adds basic auth headers to webdav uploader [#6779](https://github.com/grafana/grafana/issues/6779)
# 4.0.1 (2016-12-02)
> **Notice**
4.0.0 had a serious connection pooling issue when using a data source in proxy access mode. This bug caused a lot of resource problems
due to too many open connections/file handles on the data source backend. This problem is fixed in this release.
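The fix referenced below ([#6759](https://github.com/grafana/grafana/issues/6759)) is about reusing pooled connections in the data source reverse proxy instead of opening new ones per request. A minimal, hypothetical Go sketch of that pattern (not Grafana's actual proxy code; the backend URL, ports, and tuning values are assumptions):
```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

// One shared Transport reused by every proxied request, so idle connections
// are pooled instead of piling up as new TCP connections/file handles.
var sharedTransport = &http.Transport{
	MaxIdleConnsPerHost: 100,              // keep idle connections per backend for reuse
	IdleConnTimeout:     90 * time.Second, // eventually close unused ones
}

func newDataSourceProxy(target *url.URL) *httputil.ReverseProxy {
	proxy := httputil.NewSingleHostReverseProxy(target)
	proxy.Transport = sharedTransport // a new Transport per request would defeat pooling
	return proxy
}

func main() {
	target, _ := url.Parse("http://localhost:9090") // hypothetical data source backend
	http.Handle("/proxy/", http.StripPrefix("/proxy", newDataSourceProxy(target)))
	http.ListenAndServe(":3000", nil)
}
```
Creating a fresh `http.Transport` per request leaves each request's connections idle and unreusable, which matches the file-handle exhaustion described in the notice above.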
### Bugfixes
* **Metrics**: Fixes nil pointer dereference on my arm build [#6749](https://github.com/grafana/grafana/issues/6749)
* **Data proxy**: Fixes a tcp pooling issue in the datasource reverse proxy [#6759](https://github.com/grafana/grafana/issues/6759)
# 4.0-stable (2016-11-29)
### Bugfixes
* **Server-side rendering**: Fixed address used when rendering panel via phantomjs and using non default http_addr config [#6660](https://github.com/grafana/grafana/issues/6660)
* **Graph panel**: Fixed graph panel tooltip sort order issue [#6648](https://github.com/grafana/grafana/issues/6648)
* **Unsaved changes**: You now navigate to the intended page after saving in the unsaved changes dialog [#6675](https://github.com/grafana/grafana/issues/6675)
* **TLS Client Auth**: Support for TLS client authentication for datasource proxies [#2316](https://github.com/grafana/grafana/issues/2316)
* **Alerts out of sync**: Saving dashboards with broken alerts causes a sync problem [#6576](https://github.com/grafana/grafana/issues/6576)
* **Alerting**: Saving an alert with condition "HAS NO DATA" throws an error [#6701](https://github.com/grafana/grafana/issues/6701)
* **Config**: Improve error message when parsing broken config file [#6731](https://github.com/grafana/grafana/issues/6731)
* **Table**: Render empty dates as - instead of current date [#6728](https://github.com/grafana/grafana/issues/6728)
# 4.0-beta2 (2016-11-21)
### Bugfixes
* **Graph Panel**: Log base scale on right Y-axis had no effect, max value calc was not applied, [#6534](https://github.com/grafana/grafana/issues/6534)
* **Graph Panel**: Fixed bar width when bars were only enabled via a series override, [#6528](https://github.com/grafana/grafana/issues/6528)
* **UI/Browser**: Fixed issue with page/view header gradient border not showing in Safari, [#6530](https://github.com/grafana/grafana/issues/6530)
* **Cloudwatch**: Fixed cloudwatch datasource requesting too many datapoints, [#6544](https://github.com/grafana/grafana/issues/6544)
* **UX**: Panel Drop zone visible after duplicating panel, and when entering fullscreen/edit view, [#6598](https://github.com/grafana/grafana/issues/6598)
* **Templating**: Newly added variable was not visible immediately, only after a dashboard reload, [#6622](https://github.com/grafana/grafana/issues/6622)
### Enhancements
* **Singlestat**: Support repeated template variables in prefix/postfix [#6595](https://github.com/grafana/grafana/issues/6595)
* **Templating**: Don't persist variable options with refresh option [#6586](https://github.com/grafana/grafana/issues/6586)
* **Alerting**: Add ability to have OR conditions (and mixing AND & OR) [#6579](https://github.com/grafana/grafana/issues/6579)
* **InfluxDB**: Fix for Ad-Hoc Filters variable & changing dashboards [#6821](https://github.com/grafana/grafana/issues/6821)
# 4.0-beta1 (2016-11-09)
### Enhancements
* **Login**: Adds option to disable username/password logins, closes [#4674](https://github.com/grafana/grafana/issues/4674)
* **SingleStat**: Add series name as option in singlestat panel, closes [#4740](https://github.com/grafana/grafana/issues/4740)
* **Localization**: Week start day now dependent on browser locale setting, closes [#3003](https://github.com/grafana/grafana/issues/3003)
* **Templating**: Update panel repeats for variables that change on time refresh, closes [#5021](https://github.com/grafana/grafana/issues/5021)
* **Templating**: Add support for numeric and alphabetical sorting of variable values, closes [#2839](https://github.com/grafana/grafana/issues/2839)
* **Elasticsearch**: Support to set Precision Threshold for Unique Count metric, closes [#4689](https://github.com/grafana/grafana/issues/4689)
* **Navigation**: Add search to org switcher, closes [#2609](https://github.com/grafana/grafana/issues/2609)
* **Database**: Allow database config using one property, closes [#5456](https://github.com/grafana/grafana/pull/5456)
* **Graphite**: Add support for groupByNode, closes [#5613](https://github.com/grafana/grafana/pull/5613)
* **Graphite**: Add support for groupByNodes, closes [#5613](https://github.com/grafana/grafana/pull/5613)
* **Influxdb**: Add support for elapsed(), closes [#5827](https://github.com/grafana/grafana/pull/5827)
* **OpenTSDB**: Add support for explicitTags for OpenTSDB>=2.3, closes [#6360](https://github.com/grafana/grafana/pull/6361)
* **OAuth**: Add support for generic oauth, closes [#4718](https://github.com/grafana/grafana/pull/4718)
* **Cloudwatch**: Add support to expand multi select template variable, closes [#5003](https://github.com/grafana/grafana/pull/5003)
* **Background Tasks**: Now supports automatic purging of old snapshots, closes [#4087](https://github.com/grafana/grafana/issues/4087)
* **Background Tasks**: Now supports automatic purging of old rendered images, closes [#2172](https://github.com/grafana/grafana/issues/2172)
* **Dashboard**: After inactivity hide nav/row actions, fade to nice clean view, can be toggled with `d v`, also added kiosk mode, toggled via `d k` [#6476](https://github.com/grafana/grafana/issues/6476)
* **Dashboard**: Improved dashboard row menu & add panel UX [#6442](https://github.com/grafana/grafana/issues/6442)
* **Graph Panel**: Support for stacking null values [#2912](https://github.com/grafana/grafana/issues/2912), [#6287](https://github.com/grafana/grafana/issues/6287), thanks @benrubson!
### Breaking changes
* **SystemD**: Change systemd description, closes [#5971](https://github.com/grafana/grafana/pull/5971)
* **lodash upgrade**: Upgraded lodash from 2.4.2 to 4.15.0, this contains a number of breaking changes that could affect plugins, closes [#6021](https://github.com/grafana/grafana/pull/6021)
### Bug fixes
* **Table Panel**: Fixed problem when switching to Mixed datasource in metrics tab, fixes [#5999](https://github.com/grafana/grafana/pull/5999)
* **Playlist**: Fixed problem with play order not matching order defined in playlist, fixes [#5467](https://github.com/grafana/grafana/pull/5467)
* **Graph panel**: Fixed problem with auto decimals on y axis when datamin=datamax, fixes [#6070](https://github.com/grafana/grafana/pull/6070)
* **Snapshot**: Can view embedded panels/png rendered panels in snapshots without login, fixes [#3769](https://github.com/grafana/grafana/pull/3769)
* **Elasticsearch**: Fix for query template variable when looking up terms without query, no longer relies on elasticsearch default field, fixes [#3887](https://github.com/grafana/grafana/pull/3887)
* **Elasticsearch**: Fix for displaying IP address used in terms aggregations, fixes [#4393](https://github.com/grafana/grafana/pull/4393)
* **PNG Rendering**: Fix for server side rendering when using auth proxy, fixes [#5906](https://github.com/grafana/grafana/pull/5906)
* **OpenTSDB**: Fixed multi-value nested templating for opentsdb, fixes [#6455](https://github.com/grafana/grafana/pull/6455)
* **Playlist**: Remove playlist items when dashboard is removed, fixes [#6292](https://github.com/grafana/grafana/issues/6292)
# 3.1.2 (unreleased)
* **Templating**: Fixed issue when combining row & panel repeats, fixes [#5790](https://github.com/grafana/grafana/issues/5790)
* **Drag&Drop**: Fixed issue with drag and drop in latest Chrome(51+), fixes [#5767](https://github.com/grafana/grafana/issues/5767)
* **Internal Metrics**: Fixed issue with dots in instance_name when sending internal metrics to Graphite, fixes [#5739](https://github.com/grafana/grafana/issues/5739)
* **Grafana-CLI**: Add default plugin path for MAC OS, fixes [#5806](https://github.com/grafana/grafana/issues/5806)
* **Grafana-CLI**: Improve error message for upgrade-all command, fixes [#5885](https://github.com/grafana/grafana/issues/5885)
# 3.1.1 (2016-08-01)
* **IFrame embedding**: Fixed issue of using full iframe height, fixes [#5605](https://github.com/grafana/grafana/issues/5606)
@ -24,7 +119,6 @@
* **Graphite**: Fixed issue with mixed data sources and Graphite, fixes [#5617](https://github.com/grafana/grafana/issues/5617)
* **Templating**: Fixed issue with template variable query was issued multiple times during dashboard load, fixes [#5637](https://github.com/grafana/grafana/issues/5637)
* **Zoom**: Fixed issues with zoom in and out on embedded (iframed) panel, fixes [#4489](https://github.com/grafana/grafana/issues/4489), [#5666](https://github.com/grafana/grafana/issues/5666)
* **Templating**: Row/Panel repeat issue when saving dashboard caused dupes to appear, fixes [#5591](https://github.com/grafana/grafana/issues/5591)
# 3.1.0 stable (2016-07-12)

372
Godeps/Godeps.json generated

@ -1,372 +0,0 @@
{
"ImportPath": "github.com/grafana/grafana",
"GoVersion": "go1.5.1",
"GodepVersion": "v60",
"Packages": [
"./pkg/..."
],
"Deps": [
{
"ImportPath": "github.com/BurntSushi/toml",
"Comment": "v0.1.0-21-g056c9bc",
"Rev": "056c9bc7be7190eaa7715723883caffa5f8fa3e4"
},
{
"ImportPath": "github.com/Unknwon/com",
"Rev": "d9bcf409c8a368d06c9b347705c381e7c12d54df"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/awserr",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/awsutil",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/client",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/client/metadata",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/corehandlers",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/credentials",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/defaults",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/ec2metadata",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/request",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/session",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/endpoints",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/ec2query",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/query",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/query/queryutil",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/rest",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/signer/v4",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/waiter",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatch",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/ec2",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/sts",
"Comment": "v1.4.1",
"Rev": "f80e7d0182a463dff0c0da6bbed57f21369d4346"
},
{
"ImportPath": "github.com/bmizerany/assert",
"Comment": "release.r60-6-ge17e998",
"Rev": "e17e99893cb6509f428e1728281c2ad60a6b31e3"
},
{
"ImportPath": "github.com/bradfitz/gomemcache/memcache",
"Comment": "release.r60-40-g72a6864",
"Rev": "72a68649ba712ee7c4b5b4a943a626bcd7d90eb8"
},
{
"ImportPath": "github.com/codegangsta/cli",
"Comment": "1.2.0-187-gc31a797",
"Rev": "c31a7975863e7810c92e2e288a9ab074f9a88f29"
},
{
"ImportPath": "github.com/davecgh/go-spew/spew",
"Rev": "2df174808ee097f90d259e432cc04442cf60be21"
},
{
"ImportPath": "github.com/fatih/color",
"Comment": "v0.1-16-g4f7bcef",
"Rev": "4f7bcef27eec7925456d0c30c5e7b0408b3339be"
},
{
"ImportPath": "github.com/franela/goreq",
"Rev": "3ddeded65be21dacb5a2e2d0b95af9ff6862a2b5"
},
{
"ImportPath": "github.com/go-ini/ini",
"Comment": "v0-48-g060d7da",
"Rev": "060d7da055ba6ec5ea7a31f116332fe5efa04ce0"
},
{
"ImportPath": "github.com/go-ldap/ldap",
"Comment": "v2.2.1",
"Rev": "07a7330929b9ee80495c88a4439657d89c7dbd87"
},
{
"ImportPath": "github.com/go-macaron/binding",
"Rev": "2502aaf4bce3a4e6451b4610847bfb8dffdb6266"
},
{
"ImportPath": "github.com/go-macaron/gzip",
"Rev": "4938e9be6b279d8426cb1c89a6bcf7af70b0c21d"
},
{
"ImportPath": "github.com/go-macaron/inject",
"Rev": "c5ab7bf3a307593cd44cb272d1a5beea473dd072"
},
{
"ImportPath": "github.com/go-macaron/session",
"Rev": "66031fcb37a0fff002a1f028eb0b3a815c78306b"
},
{
"ImportPath": "github.com/go-macaron/session/memcache",
"Rev": "66031fcb37a0fff002a1f028eb0b3a815c78306b"
},
{
"ImportPath": "github.com/go-macaron/session/mysql",
"Rev": "66031fcb37a0fff002a1f028eb0b3a815c78306b"
},
{
"ImportPath": "github.com/go-macaron/session/postgres",
"Rev": "66031fcb37a0fff002a1f028eb0b3a815c78306b"
},
{
"ImportPath": "github.com/go-macaron/session/redis",
"Rev": "66031fcb37a0fff002a1f028eb0b3a815c78306b"
},
{
"ImportPath": "github.com/go-sql-driver/mysql",
"Comment": "v1.2-171-g267b128",
"Rev": "267b128680c46286b9ca13475c3cca5de8f79bd7"
},
{
"ImportPath": "github.com/go-stack/stack",
"Comment": "v1.5.2",
"Rev": "100eb0c0a9c5b306ca2fb4f165df21d80ada4b82"
},
{
"ImportPath": "github.com/go-xorm/core",
"Comment": "v0.4.4-7-g9e608f7",
"Rev": "9e608f7330b9d16fe2818cfe731128b3f156cb9a"
},
{
"ImportPath": "github.com/go-xorm/xorm",
"Comment": "v0.4.4-44-gf561133",
"Rev": "f56113384f2c63dfe4cd8e768e349f1c35122b58"
},
{
"ImportPath": "github.com/gorilla/websocket",
"Rev": "c45a635370221f34fea2d5163fd156fcb4e38e8a"
},
{
"ImportPath": "github.com/gosimple/slug",
"Rev": "8d258463b4459f161f51d6a357edacd3eef9d663"
},
{
"ImportPath": "github.com/hashicorp/go-version",
"Rev": "7e3c02b30806fa5779d3bdfc152ce4c6f40e7b38"
},
{
"ImportPath": "github.com/inconshreveable/log15",
"Comment": "v2.3-61-g20bca5a",
"Rev": "20bca5a7a57282e241fac83ec9ea42538027f1c1"
},
{
"ImportPath": "github.com/inconshreveable/log15/term",
"Comment": "v2.3-61-g20bca5a",
"Rev": "20bca5a7a57282e241fac83ec9ea42538027f1c1"
},
{
"ImportPath": "github.com/jmespath/go-jmespath",
"Comment": "0.2.2",
"Rev": "3433f3ea46d9f8019119e7dd41274e112a2359a9"
},
{
"ImportPath": "github.com/jtolds/gls",
"Rev": "f1ac7f4f24f50328e6bc838ca4437d1612a0243c"
},
{
"ImportPath": "github.com/klauspost/compress/flate",
"Rev": "7b02889a2005228347aef0e76beeaee564d82f8c"
},
{
"ImportPath": "github.com/klauspost/compress/gzip",
"Rev": "7b02889a2005228347aef0e76beeaee564d82f8c"
},
{
"ImportPath": "github.com/klauspost/cpuid",
"Rev": "349c675778172472f5e8f3a3e0fe187e302e5a10"
},
{
"ImportPath": "github.com/klauspost/crc32",
"Rev": "6834731faf32e62a2dd809d99fb24d1e4ae5a92d"
},
{
"ImportPath": "github.com/kr/pretty",
"Comment": "go.weekly.2011-12-22-27-ge6ac2fc",
"Rev": "e6ac2fc51e89a3249e82157fa0bb7a18ef9dd5bb"
},
{
"ImportPath": "github.com/kr/text",
"Rev": "bb797dc4fb8320488f47bf11de07a733d7233e1f"
},
{
"ImportPath": "github.com/lib/pq",
"Comment": "go1.0-cutoff-13-g19eeca3",
"Rev": "19eeca3e30d2577b1761db471ec130810e67f532"
},
{
"ImportPath": "github.com/lib/pq/oid",
"Comment": "go1.0-cutoff-13-g19eeca3",
"Rev": "19eeca3e30d2577b1761db471ec130810e67f532"
},
{
"ImportPath": "github.com/mattn/go-colorable",
"Rev": "9cbef7c35391cca05f15f8181dc0b18bc9736dbb"
},
{
"ImportPath": "github.com/mattn/go-isatty",
"Rev": "56b76bdf51f7708750eac80fa38b952bb9f32639"
},
{
"ImportPath": "github.com/mattn/go-sqlite3",
"Comment": "v1.1.0-67-g7204887",
"Rev": "7204887cf3a42df1cfaa5505dc3a3427f6dded8b"
},
{
"ImportPath": "github.com/rainycape/unidecode",
"Rev": "836ef0a715aedf08a12d595ed73ec8ed5b288cac"
},
{
"ImportPath": "github.com/smartystreets/goconvey/convey",
"Comment": "1.5.0-356-gfbc0a1c",
"Rev": "fbc0a1c888f9f96263f9a559d1769905245f1123"
},
{
"ImportPath": "github.com/smartystreets/goconvey/convey/assertions",
"Comment": "1.5.0-356-gfbc0a1c",
"Rev": "fbc0a1c888f9f96263f9a559d1769905245f1123"
},
{
"ImportPath": "github.com/smartystreets/goconvey/convey/assertions/oglematchers",
"Comment": "1.5.0-356-gfbc0a1c",
"Rev": "fbc0a1c888f9f96263f9a559d1769905245f1123"
},
{
"ImportPath": "github.com/smartystreets/goconvey/convey/gotest",
"Comment": "1.5.0-356-gfbc0a1c",
"Rev": "fbc0a1c888f9f96263f9a559d1769905245f1123"
},
{
"ImportPath": "github.com/smartystreets/goconvey/convey/reporting",
"Comment": "1.5.0-356-gfbc0a1c",
"Rev": "fbc0a1c888f9f96263f9a559d1769905245f1123"
},
{
"ImportPath": "github.com/streadway/amqp",
"Rev": "150b7f24d6ad507e6026c13d85ce1f1391ac7400"
},
{
"ImportPath": "golang.org/x/net/context",
"Rev": "972f0c5fbe4ae29e666c3f78c3ed42ae7a448b0a"
},
{
"ImportPath": "golang.org/x/oauth2",
"Rev": "c58fcf0ffc1c772aa2e1ee4894bc19f2649263b2"
},
{
"ImportPath": "golang.org/x/sys/unix",
"Rev": "7a56174f0086b32866ebd746a794417edbc678a1"
},
{
"ImportPath": "gopkg.in/asn1-ber.v1",
"Comment": "v1",
"Rev": "9eae18c3681ae3d3c677ac2b80a8fe57de45fc09"
},
{
"ImportPath": "gopkg.in/bufio.v1",
"Comment": "v1",
"Rev": "567b2bfa514e796916c4747494d6ff5132a1dfce"
},
{
"ImportPath": "gopkg.in/ini.v1",
"Comment": "v0-16-g1772191",
"Rev": "177219109c97e7920c933e21c9b25f874357b237"
},
{
"ImportPath": "gopkg.in/macaron.v1",
"Rev": "1c6dd87797ae9319b4658cbd48d1d0420b279fd5"
},
{
"ImportPath": "gopkg.in/redis.v2",
"Comment": "v2.3.2",
"Rev": "e6179049628164864e6e84e973cfb56335748dea"
}
]
}

5
Godeps/Readme generated

@ -1,5 +0,0 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

2
Godeps/_workspace/.gitignore generated vendored

@ -1,2 +0,0 @@
/pkg
/bin


@ -1,11 +0,0 @@
dist
/doc
/doc-staging
.yardoc
Gemfile.lock
awstesting/integration/smoke/**/importmarker__.go
awstesting/integration/smoke/_test/
/vendor/bin/
/vendor/pkg/
/vendor/src/
/private/model/cli/gen-api/gen-api


@ -1,14 +0,0 @@
{
"PkgHandler": {
"Pattern": "/sdk-for-go/api/",
"StripPrefix": "/sdk-for-go/api",
"Include": ["/src/github.com/aws/aws-sdk-go/aws", "/src/github.com/aws/aws-sdk-go/service"],
"Exclude": ["/src/cmd", "/src/github.com/aws/aws-sdk-go/awstesting", "/src/github.com/aws/aws-sdk-go/awsmigrate"],
"IgnoredSuffixes": ["iface"]
},
"Github": {
"Tag": "master",
"Repo": "/aws/aws-sdk-go",
"UseGithub": true
}
}


@ -1,23 +0,0 @@
language: go
sudo: false
go:
- 1.4
- 1.5
- 1.6
- tip
# Use Go 1.5's vendoring experiment for 1.5 tests. 1.4 tests will use the tip of the dependencies repo.
env:
- GO15VENDOREXPERIMENT=1
install:
- make get-deps
script:
- make unit-with-race-cover
matrix:
allow_failures:
- go: tip


@ -1,7 +0,0 @@
--plugin go
-e doc-src/plugin/plugin.rb
-m markdown
-o doc/api
--title "AWS SDK for Go"
aws/**/*.go
service/**/*.go


@ -1,233 +0,0 @@
package awsutil_test
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"testing"
"github.com/aws/aws-sdk-go/aws/awsutil"
"github.com/stretchr/testify/assert"
)
func ExampleCopy() {
type Foo struct {
A int
B []*string
}
// Create the initial value
str1 := "hello"
str2 := "bye bye"
f1 := &Foo{A: 1, B: []*string{&str1, &str2}}
// Do the copy
var f2 Foo
awsutil.Copy(&f2, f1)
// Print the result
fmt.Println(awsutil.Prettify(f2))
// Output:
// {
// A: 1,
// B: ["hello","bye bye"]
// }
}
func TestCopy(t *testing.T) {
type Foo struct {
A int
B []*string
C map[string]*int
}
// Create the initial value
str1 := "hello"
str2 := "bye bye"
int1 := 1
int2 := 2
f1 := &Foo{
A: 1,
B: []*string{&str1, &str2},
C: map[string]*int{
"A": &int1,
"B": &int2,
},
}
// Do the copy
var f2 Foo
awsutil.Copy(&f2, f1)
// Values are equal
assert.Equal(t, f2.A, f1.A)
assert.Equal(t, f2.B, f1.B)
assert.Equal(t, f2.C, f1.C)
// But pointers are not!
str3 := "nothello"
int3 := 57
f2.A = 100
f2.B[0] = &str3
f2.C["B"] = &int3
assert.NotEqual(t, f2.A, f1.A)
assert.NotEqual(t, f2.B, f1.B)
assert.NotEqual(t, f2.C, f1.C)
}
func TestCopyNestedWithUnexported(t *testing.T) {
type Bar struct {
a int
B int
}
type Foo struct {
A string
B Bar
}
f1 := &Foo{A: "string", B: Bar{a: 1, B: 2}}
var f2 Foo
awsutil.Copy(&f2, f1)
// Values match
assert.Equal(t, f2.A, f1.A)
assert.NotEqual(t, f2.B, f1.B)
assert.NotEqual(t, f2.B.a, f1.B.a)
assert.Equal(t, f2.B.B, f2.B.B)
}
func TestCopyIgnoreNilMembers(t *testing.T) {
type Foo struct {
A *string
B []string
C map[string]string
}
f := &Foo{}
assert.Nil(t, f.A)
assert.Nil(t, f.B)
assert.Nil(t, f.C)
var f2 Foo
awsutil.Copy(&f2, f)
assert.Nil(t, f2.A)
assert.Nil(t, f2.B)
assert.Nil(t, f2.C)
fcopy := awsutil.CopyOf(f)
f3 := fcopy.(*Foo)
assert.Nil(t, f3.A)
assert.Nil(t, f3.B)
assert.Nil(t, f3.C)
}
func TestCopyPrimitive(t *testing.T) {
str := "hello"
var s string
awsutil.Copy(&s, &str)
assert.Equal(t, "hello", s)
}
func TestCopyNil(t *testing.T) {
var s string
awsutil.Copy(&s, nil)
assert.Equal(t, "", s)
}
func TestCopyReader(t *testing.T) {
var buf io.Reader = bytes.NewReader([]byte("hello world"))
var r io.Reader
awsutil.Copy(&r, buf)
b, err := ioutil.ReadAll(r)
assert.NoError(t, err)
assert.Equal(t, []byte("hello world"), b)
// empty bytes because this is not a deep copy
b, err = ioutil.ReadAll(buf)
assert.NoError(t, err)
assert.Equal(t, []byte(""), b)
}
func TestCopyDifferentStructs(t *testing.T) {
type SrcFoo struct {
A int
B []*string
C map[string]*int
SrcUnique string
SameNameDiffType int
unexportedPtr *int
ExportedPtr *int
}
type DstFoo struct {
A int
B []*string
C map[string]*int
DstUnique int
SameNameDiffType string
unexportedPtr *int
ExportedPtr *int
}
// Create the initial value
str1 := "hello"
str2 := "bye bye"
int1 := 1
int2 := 2
f1 := &SrcFoo{
A: 1,
B: []*string{&str1, &str2},
C: map[string]*int{
"A": &int1,
"B": &int2,
},
SrcUnique: "unique",
SameNameDiffType: 1,
unexportedPtr: &int1,
ExportedPtr: &int2,
}
// Do the copy
var f2 DstFoo
awsutil.Copy(&f2, f1)
// Values are equal
assert.Equal(t, f2.A, f1.A)
assert.Equal(t, f2.B, f1.B)
assert.Equal(t, f2.C, f1.C)
assert.Equal(t, "unique", f1.SrcUnique)
assert.Equal(t, 1, f1.SameNameDiffType)
assert.Equal(t, 0, f2.DstUnique)
assert.Equal(t, "", f2.SameNameDiffType)
assert.Equal(t, int1, *f1.unexportedPtr)
assert.Nil(t, f2.unexportedPtr)
assert.Equal(t, int2, *f1.ExportedPtr)
assert.Equal(t, int2, *f2.ExportedPtr)
}
func ExampleCopyOf() {
type Foo struct {
A int
B []*string
}
// Create the initial value
str1 := "hello"
str2 := "bye bye"
f1 := &Foo{A: 1, B: []*string{&str1, &str2}}
// Do the copy
v := awsutil.CopyOf(f1)
var f2 *Foo = v.(*Foo)
// Print the result
fmt.Println(awsutil.Prettify(f2))
// Output:
// {
// A: 1,
// B: ["hello","bye bye"]
// }
}


@ -1,29 +0,0 @@
package awsutil_test
import (
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awsutil"
"github.com/stretchr/testify/assert"
)
func TestDeepEqual(t *testing.T) {
cases := []struct {
a, b interface{}
equal bool
}{
{"a", "a", true},
{"a", "b", false},
{"a", aws.String(""), false},
{"a", nil, false},
{"a", aws.String("a"), true},
{(*bool)(nil), (*bool)(nil), true},
{(*bool)(nil), (*string)(nil), false},
{nil, nil, true},
}
for i, c := range cases {
assert.Equal(t, c.equal, awsutil.DeepEqual(c.a, c.b), "%d, a:%v b:%v, %t", i, c.a, c.b, c.equal)
}
}


@ -1,142 +0,0 @@
package awsutil_test
import (
"testing"
"github.com/aws/aws-sdk-go/aws/awsutil"
"github.com/stretchr/testify/assert"
)
type Struct struct {
A []Struct
z []Struct
B *Struct
D *Struct
C string
E map[string]string
}
var data = Struct{
A: []Struct{{C: "value1"}, {C: "value2"}, {C: "value3"}},
z: []Struct{{C: "value1"}, {C: "value2"}, {C: "value3"}},
B: &Struct{B: &Struct{C: "terminal"}, D: &Struct{C: "terminal2"}},
C: "initial",
}
var data2 = Struct{A: []Struct{
{A: []Struct{{C: "1"}, {C: "1"}, {C: "1"}, {C: "1"}, {C: "1"}}},
{A: []Struct{{C: "2"}, {C: "2"}, {C: "2"}, {C: "2"}, {C: "2"}}},
}}
func TestValueAtPathSuccess(t *testing.T) {
var testCases = []struct {
expect []interface{}
data interface{}
path string
}{
{[]interface{}{"initial"}, data, "C"},
{[]interface{}{"value1"}, data, "A[0].C"},
{[]interface{}{"value2"}, data, "A[1].C"},
{[]interface{}{"value3"}, data, "A[2].C"},
{[]interface{}{"value3"}, data, "a[2].c"},
{[]interface{}{"value3"}, data, "A[-1].C"},
{[]interface{}{"value1", "value2", "value3"}, data, "A[].C"},
{[]interface{}{"terminal"}, data, "B . B . C"},
{[]interface{}{"initial"}, data, "A.D.X || C"},
{[]interface{}{"initial"}, data, "A[0].B || C"},
{[]interface{}{
Struct{A: []Struct{{C: "1"}, {C: "1"}, {C: "1"}, {C: "1"}, {C: "1"}}},
Struct{A: []Struct{{C: "2"}, {C: "2"}, {C: "2"}, {C: "2"}, {C: "2"}}},
}, data2, "A"},
}
for i, c := range testCases {
v, err := awsutil.ValuesAtPath(c.data, c.path)
assert.NoError(t, err, "case %d, expected no error, %s", i, c.path)
assert.Equal(t, c.expect, v, "case %d, %s", i, c.path)
}
}
func TestValueAtPathFailure(t *testing.T) {
var testCases = []struct {
expect []interface{}
errContains string
data interface{}
path string
}{
{nil, "", data, "C.x"},
{nil, "SyntaxError: Invalid token: tDot", data, ".x"},
{nil, "", data, "X.Y.Z"},
{nil, "", data, "A[100].C"},
{nil, "", data, "A[3].C"},
{nil, "", data, "B.B.C.Z"},
{nil, "", data, "z[-1].C"},
{nil, "", nil, "A.B.C"},
{[]interface{}{}, "", Struct{}, "A"},
{nil, "", data, "A[0].B.C"},
{nil, "", data, "D"},
}
for i, c := range testCases {
v, err := awsutil.ValuesAtPath(c.data, c.path)
if c.errContains != "" {
assert.Contains(t, err.Error(), c.errContains, "case %d, expected error, %s", i, c.path)
continue
} else {
assert.NoError(t, err, "case %d, expected no error, %s", i, c.path)
}
assert.Equal(t, c.expect, v, "case %d, %s", i, c.path)
}
}
func TestSetValueAtPathSuccess(t *testing.T) {
var s Struct
awsutil.SetValueAtPath(&s, "C", "test1")
awsutil.SetValueAtPath(&s, "B.B.C", "test2")
awsutil.SetValueAtPath(&s, "B.D.C", "test3")
assert.Equal(t, "test1", s.C)
assert.Equal(t, "test2", s.B.B.C)
assert.Equal(t, "test3", s.B.D.C)
awsutil.SetValueAtPath(&s, "B.*.C", "test0")
assert.Equal(t, "test0", s.B.B.C)
assert.Equal(t, "test0", s.B.D.C)
var s2 Struct
awsutil.SetValueAtPath(&s2, "b.b.c", "test0")
assert.Equal(t, "test0", s2.B.B.C)
awsutil.SetValueAtPath(&s2, "A", []Struct{{}})
assert.Equal(t, []Struct{{}}, s2.A)
str := "foo"
s3 := Struct{}
awsutil.SetValueAtPath(&s3, "b.b.c", str)
assert.Equal(t, "foo", s3.B.B.C)
s3 = Struct{B: &Struct{B: &Struct{C: str}}}
awsutil.SetValueAtPath(&s3, "b.b.c", nil)
assert.Equal(t, "", s3.B.B.C)
s3 = Struct{}
awsutil.SetValueAtPath(&s3, "b.b.c", nil)
assert.Equal(t, "", s3.B.B.C)
s3 = Struct{}
awsutil.SetValueAtPath(&s3, "b.b.c", &str)
assert.Equal(t, "foo", s3.B.B.C)
var s4 struct{ Name *string }
awsutil.SetValueAtPath(&s4, "Name", str)
assert.Equal(t, str, *s4.Name)
s4 = struct{ Name *string }{}
awsutil.SetValueAtPath(&s4, "Name", nil)
assert.Equal(t, (*string)(nil), s4.Name)
s4 = struct{ Name *string }{Name: &str}
awsutil.SetValueAtPath(&s4, "Name", nil)
assert.Equal(t, (*string)(nil), s4.Name)
s4 = struct{ Name *string }{}
awsutil.SetValueAtPath(&s4, "Name", &str)
assert.Equal(t, str, *s4.Name)
}


@ -1,86 +0,0 @@
package aws
import (
"net/http"
"reflect"
"testing"
"github.com/aws/aws-sdk-go/aws/credentials"
)
var testCredentials = credentials.NewStaticCredentials("AKID", "SECRET", "SESSION")
var copyTestConfig = Config{
Credentials: testCredentials,
Endpoint: String("CopyTestEndpoint"),
Region: String("COPY_TEST_AWS_REGION"),
DisableSSL: Bool(true),
HTTPClient: http.DefaultClient,
LogLevel: LogLevel(LogDebug),
Logger: NewDefaultLogger(),
MaxRetries: Int(3),
DisableParamValidation: Bool(true),
DisableComputeChecksums: Bool(true),
S3ForcePathStyle: Bool(true),
}
func TestCopy(t *testing.T) {
want := copyTestConfig
got := copyTestConfig.Copy()
if !reflect.DeepEqual(*got, want) {
t.Errorf("Copy() = %+v", got)
t.Errorf(" want %+v", want)
}
got.Region = String("other")
if got.Region == want.Region {
t.Errorf("Expect setting copy values not not reflect in source")
}
}
func TestCopyReturnsNewInstance(t *testing.T) {
want := copyTestConfig
got := copyTestConfig.Copy()
if got == &want {
t.Errorf("Copy() = %p; want different instance as source %p", got, &want)
}
}
var mergeTestZeroValueConfig = Config{}
var mergeTestConfig = Config{
Credentials: testCredentials,
Endpoint: String("MergeTestEndpoint"),
Region: String("MERGE_TEST_AWS_REGION"),
DisableSSL: Bool(true),
HTTPClient: http.DefaultClient,
LogLevel: LogLevel(LogDebug),
Logger: NewDefaultLogger(),
MaxRetries: Int(10),
DisableParamValidation: Bool(true),
DisableComputeChecksums: Bool(true),
S3ForcePathStyle: Bool(true),
}
var mergeTests = []struct {
cfg *Config
in *Config
want *Config
}{
{&Config{}, nil, &Config{}},
{&Config{}, &mergeTestZeroValueConfig, &Config{}},
{&Config{}, &mergeTestConfig, &mergeTestConfig},
}
func TestMerge(t *testing.T) {
for i, tt := range mergeTests {
got := tt.cfg.Copy()
got.MergeIn(tt.in)
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("Config %d %+v", i, tt.cfg)
t.Errorf(" Merge(%+v)", tt.in)
t.Errorf(" got %+v", got)
t.Errorf(" want %+v", tt.want)
}
}
}


@ -1,437 +0,0 @@
package aws
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
var testCasesStringSlice = [][]string{
{"a", "b", "c", "d", "e"},
{"a", "b", "", "", "e"},
}
func TestStringSlice(t *testing.T) {
for idx, in := range testCasesStringSlice {
if in == nil {
continue
}
out := StringSlice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := StringValueSlice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}
var testCasesStringValueSlice = [][]*string{
{String("a"), String("b"), nil, String("c")},
}
func TestStringValueSlice(t *testing.T) {
for idx, in := range testCasesStringValueSlice {
if in == nil {
continue
}
out := StringValueSlice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
if in[i] == nil {
assert.Empty(t, out[i], "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, *(in[i]), out[i], "Unexpected value at idx %d", idx)
}
}
out2 := StringSlice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
for i := range out2 {
if in[i] == nil {
assert.Empty(t, *(out2[i]), "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, in[i], out2[i], "Unexpected value at idx %d", idx)
}
}
}
}
var testCasesStringMap = []map[string]string{
{"a": "1", "b": "2", "c": "3"},
}
func TestStringMap(t *testing.T) {
for idx, in := range testCasesStringMap {
if in == nil {
continue
}
out := StringMap(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := StringValueMap(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}
var testCasesBoolSlice = [][]bool{
{true, true, false, false},
}
func TestBoolSlice(t *testing.T) {
for idx, in := range testCasesBoolSlice {
if in == nil {
continue
}
out := BoolSlice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := BoolValueSlice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}
var testCasesBoolValueSlice = [][]*bool{}
func TestBoolValueSlice(t *testing.T) {
for idx, in := range testCasesBoolValueSlice {
if in == nil {
continue
}
out := BoolValueSlice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
if in[i] == nil {
assert.Empty(t, out[i], "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, *(in[i]), out[i], "Unexpected value at idx %d", idx)
}
}
out2 := BoolSlice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
for i := range out2 {
if in[i] == nil {
assert.Empty(t, *(out2[i]), "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, in[i], out2[i], "Unexpected value at idx %d", idx)
}
}
}
}
var testCasesBoolMap = []map[string]bool{
{"a": true, "b": false, "c": true},
}
func TestBoolMap(t *testing.T) {
for idx, in := range testCasesBoolMap {
if in == nil {
continue
}
out := BoolMap(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := BoolValueMap(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}
var testCasesIntSlice = [][]int{
{1, 2, 3, 4},
}
func TestIntSlice(t *testing.T) {
for idx, in := range testCasesIntSlice {
if in == nil {
continue
}
out := IntSlice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := IntValueSlice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}
var testCasesIntValueSlice = [][]*int{}
func TestIntValueSlice(t *testing.T) {
for idx, in := range testCasesIntValueSlice {
if in == nil {
continue
}
out := IntValueSlice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
if in[i] == nil {
assert.Empty(t, out[i], "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, *(in[i]), out[i], "Unexpected value at idx %d", idx)
}
}
out2 := IntSlice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
for i := range out2 {
if in[i] == nil {
assert.Empty(t, *(out2[i]), "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, in[i], out2[i], "Unexpected value at idx %d", idx)
}
}
}
}
var testCasesIntMap = []map[string]int{
{"a": 3, "b": 2, "c": 1},
}
func TestIntMap(t *testing.T) {
for idx, in := range testCasesIntMap {
if in == nil {
continue
}
out := IntMap(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := IntValueMap(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}
var testCasesInt64Slice = [][]int64{
{1, 2, 3, 4},
}
func TestInt64Slice(t *testing.T) {
for idx, in := range testCasesInt64Slice {
if in == nil {
continue
}
out := Int64Slice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := Int64ValueSlice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}
var testCasesInt64ValueSlice = [][]*int64{}
func TestInt64ValueSlice(t *testing.T) {
for idx, in := range testCasesInt64ValueSlice {
if in == nil {
continue
}
out := Int64ValueSlice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
if in[i] == nil {
assert.Empty(t, out[i], "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, *(in[i]), out[i], "Unexpected value at idx %d", idx)
}
}
out2 := Int64Slice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
for i := range out2 {
if in[i] == nil {
assert.Empty(t, *(out2[i]), "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, in[i], out2[i], "Unexpected value at idx %d", idx)
}
}
}
}
var testCasesInt64Map = []map[string]int64{
{"a": 3, "b": 2, "c": 1},
}
func TestInt64Map(t *testing.T) {
for idx, in := range testCasesInt64Map {
if in == nil {
continue
}
out := Int64Map(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := Int64ValueMap(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}
var testCasesFloat64Slice = [][]float64{
{1, 2, 3, 4},
}
func TestFloat64Slice(t *testing.T) {
for idx, in := range testCasesFloat64Slice {
if in == nil {
continue
}
out := Float64Slice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := Float64ValueSlice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}
var testCasesFloat64ValueSlice = [][]*float64{}
func TestFloat64ValueSlice(t *testing.T) {
for idx, in := range testCasesFloat64ValueSlice {
if in == nil {
continue
}
out := Float64ValueSlice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
if in[i] == nil {
assert.Empty(t, out[i], "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, *(in[i]), out[i], "Unexpected value at idx %d", idx)
}
}
out2 := Float64Slice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
for i := range out2 {
if in[i] == nil {
assert.Empty(t, *(out2[i]), "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, in[i], out2[i], "Unexpected value at idx %d", idx)
}
}
}
}
var testCasesFloat64Map = []map[string]float64{
{"a": 3, "b": 2, "c": 1},
}
func TestFloat64Map(t *testing.T) {
for idx, in := range testCasesFloat64Map {
if in == nil {
continue
}
out := Float64Map(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := Float64ValueMap(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}
var testCasesTimeSlice = [][]time.Time{
{time.Now(), time.Now().AddDate(100, 0, 0)},
}
func TestTimeSlice(t *testing.T) {
for idx, in := range testCasesTimeSlice {
if in == nil {
continue
}
out := TimeSlice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := TimeValueSlice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}
var testCasesTimeValueSlice = [][]*time.Time{}
func TestTimeValueSlice(t *testing.T) {
for idx, in := range testCasesTimeValueSlice {
if in == nil {
continue
}
out := TimeValueSlice(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
if in[i] == nil {
assert.Empty(t, out[i], "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, *(in[i]), out[i], "Unexpected value at idx %d", idx)
}
}
out2 := TimeSlice(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
for i := range out2 {
if in[i] == nil {
assert.Empty(t, *(out2[i]), "Unexpected value at idx %d", idx)
} else {
assert.Equal(t, in[i], out2[i], "Unexpected value at idx %d", idx)
}
}
}
}
var testCasesTimeMap = []map[string]time.Time{
{"a": time.Now().AddDate(-100, 0, 0), "b": time.Now()},
}
func TestTimeMap(t *testing.T) {
for idx, in := range testCasesTimeMap {
if in == nil {
continue
}
out := TimeMap(in)
assert.Len(t, out, len(in), "Unexpected len at idx %d", idx)
for i := range out {
assert.Equal(t, in[i], *(out[i]), "Unexpected value at idx %d", idx)
}
out2 := TimeValueMap(out)
assert.Len(t, out2, len(in), "Unexpected len at idx %d", idx)
assert.Equal(t, in, out2, "Unexpected value at idx %d", idx)
}
}


@ -1,192 +0,0 @@
package corehandlers_test
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/corehandlers"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/awstesting"
"github.com/aws/aws-sdk-go/awstesting/unit"
"github.com/aws/aws-sdk-go/service/s3"
)
func TestValidateEndpointHandler(t *testing.T) {
os.Clearenv()
svc := awstesting.NewClient(aws.NewConfig().WithRegion("us-west-2"))
svc.Handlers.Clear()
svc.Handlers.Validate.PushBackNamed(corehandlers.ValidateEndpointHandler)
req := svc.NewRequest(&request.Operation{Name: "Operation"}, nil, nil)
err := req.Build()
assert.NoError(t, err)
}
func TestValidateEndpointHandlerErrorRegion(t *testing.T) {
os.Clearenv()
svc := awstesting.NewClient()
svc.Handlers.Clear()
svc.Handlers.Validate.PushBackNamed(corehandlers.ValidateEndpointHandler)
req := svc.NewRequest(&request.Operation{Name: "Operation"}, nil, nil)
err := req.Build()
assert.Error(t, err)
assert.Equal(t, aws.ErrMissingRegion, err)
}
type mockCredsProvider struct {
expired bool
retrieveCalled bool
}
func (m *mockCredsProvider) Retrieve() (credentials.Value, error) {
m.retrieveCalled = true
return credentials.Value{ProviderName: "mockCredsProvider"}, nil
}
func (m *mockCredsProvider) IsExpired() bool {
return m.expired
}
func TestAfterRetryRefreshCreds(t *testing.T) {
os.Clearenv()
credProvider := &mockCredsProvider{}
svc := awstesting.NewClient(&aws.Config{
Credentials: credentials.NewCredentials(credProvider),
MaxRetries: aws.Int(1),
})
svc.Handlers.Clear()
svc.Handlers.ValidateResponse.PushBack(func(r *request.Request) {
r.Error = awserr.New("UnknownError", "", nil)
r.HTTPResponse = &http.Response{StatusCode: 400, Body: ioutil.NopCloser(bytes.NewBuffer([]byte{}))}
})
svc.Handlers.UnmarshalError.PushBack(func(r *request.Request) {
r.Error = awserr.New("ExpiredTokenException", "", nil)
})
svc.Handlers.AfterRetry.PushBackNamed(corehandlers.AfterRetryHandler)
assert.True(t, svc.Config.Credentials.IsExpired(), "Expect to start out expired")
assert.False(t, credProvider.retrieveCalled)
req := svc.NewRequest(&request.Operation{Name: "Operation"}, nil, nil)
req.Send()
assert.True(t, svc.Config.Credentials.IsExpired())
assert.False(t, credProvider.retrieveCalled)
_, err := svc.Config.Credentials.Get()
assert.NoError(t, err)
assert.True(t, credProvider.retrieveCalled)
}
type testSendHandlerTransport struct{}
func (t *testSendHandlerTransport) RoundTrip(r *http.Request) (*http.Response, error) {
return nil, fmt.Errorf("mock error")
}
func TestSendHandlerError(t *testing.T) {
svc := awstesting.NewClient(&aws.Config{
HTTPClient: &http.Client{
Transport: &testSendHandlerTransport{},
},
})
svc.Handlers.Clear()
svc.Handlers.Send.PushBackNamed(corehandlers.SendHandler)
r := svc.NewRequest(&request.Operation{Name: "Operation"}, nil, nil)
r.Send()
assert.Error(t, r.Error)
assert.NotNil(t, r.HTTPResponse)
}
func setupContentLengthTestServer(t *testing.T, hasContentLength bool, contentLength int64) *httptest.Server {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, ok := r.Header["Content-Length"]
assert.Equal(t, hasContentLength, ok, "expect content length to be set, %t", hasContentLength)
assert.Equal(t, contentLength, r.ContentLength)
b, err := ioutil.ReadAll(r.Body)
assert.NoError(t, err)
r.Body.Close()
authHeader := r.Header.Get("Authorization")
if hasContentLength {
assert.Contains(t, authHeader, "content-length")
} else {
assert.NotContains(t, authHeader, "content-length")
}
assert.Equal(t, contentLength, int64(len(b)))
}))
return server
}
func TestBuildContentLength_ZeroBody(t *testing.T) {
server := setupContentLengthTestServer(t, false, 0)
svc := s3.New(unit.Session, &aws.Config{
Endpoint: aws.String(server.URL),
S3ForcePathStyle: aws.Bool(true),
DisableSSL: aws.Bool(true),
})
_, err := svc.GetObject(&s3.GetObjectInput{
Bucket: aws.String("bucketname"),
Key: aws.String("keyname"),
})
assert.NoError(t, err)
}
func TestBuildContentLength_NegativeBody(t *testing.T) {
server := setupContentLengthTestServer(t, false, 0)
svc := s3.New(unit.Session, &aws.Config{
Endpoint: aws.String(server.URL),
S3ForcePathStyle: aws.Bool(true),
DisableSSL: aws.Bool(true),
})
req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
Bucket: aws.String("bucketname"),
Key: aws.String("keyname"),
})
req.HTTPRequest.Header.Set("Content-Length", "-1")
assert.NoError(t, req.Send())
}
func TestBuildContentLength_WithBody(t *testing.T) {
server := setupContentLengthTestServer(t, true, 1024)
svc := s3.New(unit.Session, &aws.Config{
Endpoint: aws.String(server.URL),
S3ForcePathStyle: aws.Bool(true),
DisableSSL: aws.Bool(true),
})
_, err := svc.PutObject(&s3.PutObjectInput{
Bucket: aws.String("bucketname"),
Key: aws.String("keyname"),
Body: bytes.NewReader(make([]byte, 1024)),
})
assert.NoError(t, err)
}


@ -1,254 +0,0 @@
package corehandlers_test
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/client"
"github.com/aws/aws-sdk-go/aws/client/metadata"
"github.com/aws/aws-sdk-go/aws/corehandlers"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/awstesting/unit"
"github.com/aws/aws-sdk-go/service/kinesis"
"github.com/stretchr/testify/require"
)
var testSvc = func() *client.Client {
s := &client.Client{
Config: aws.Config{},
ClientInfo: metadata.ClientInfo{
ServiceName: "mock-service",
APIVersion: "2015-01-01",
},
}
return s
}()
type StructShape struct {
_ struct{} `type:"structure"`
RequiredList []*ConditionalStructShape `required:"true"`
RequiredMap map[string]*ConditionalStructShape `required:"true"`
RequiredBool *bool `required:"true"`
OptionalStruct *ConditionalStructShape
hiddenParameter *string
}
func (s *StructShape) Validate() error {
invalidParams := request.ErrInvalidParams{Context: "StructShape"}
if s.RequiredList == nil {
invalidParams.Add(request.NewErrParamRequired("RequiredList"))
}
if s.RequiredMap == nil {
invalidParams.Add(request.NewErrParamRequired("RequiredMap"))
}
if s.RequiredBool == nil {
invalidParams.Add(request.NewErrParamRequired("RequiredBool"))
}
if s.RequiredList != nil {
for i, v := range s.RequiredList {
if v == nil {
continue
}
if err := v.Validate(); err != nil {
invalidParams.AddNested(fmt.Sprintf("%s[%v]", "RequiredList", i), err.(request.ErrInvalidParams))
}
}
}
if s.RequiredMap != nil {
for i, v := range s.RequiredMap {
if v == nil {
continue
}
if err := v.Validate(); err != nil {
invalidParams.AddNested(fmt.Sprintf("%s[%v]", "RequiredMap", i), err.(request.ErrInvalidParams))
}
}
}
if s.OptionalStruct != nil {
if err := s.OptionalStruct.Validate(); err != nil {
invalidParams.AddNested("OptionalStruct", err.(request.ErrInvalidParams))
}
}
if invalidParams.Len() > 0 {
return invalidParams
}
return nil
}
type ConditionalStructShape struct {
_ struct{} `type:"structure"`
Name *string `required:"true"`
}
func (s *ConditionalStructShape) Validate() error {
invalidParams := request.ErrInvalidParams{Context: "ConditionalStructShape"}
if s.Name == nil {
invalidParams.Add(request.NewErrParamRequired("Name"))
}
if invalidParams.Len() > 0 {
return invalidParams
}
return nil
}
func TestNoErrors(t *testing.T) {
input := &StructShape{
RequiredList: []*ConditionalStructShape{},
RequiredMap: map[string]*ConditionalStructShape{
"key1": {Name: aws.String("Name")},
"key2": {Name: aws.String("Name")},
},
RequiredBool: aws.Bool(true),
OptionalStruct: &ConditionalStructShape{Name: aws.String("Name")},
}
req := testSvc.NewRequest(&request.Operation{}, input, nil)
corehandlers.ValidateParametersHandler.Fn(req)
require.NoError(t, req.Error)
}
func TestMissingRequiredParameters(t *testing.T) {
input := &StructShape{}
req := testSvc.NewRequest(&request.Operation{}, input, nil)
corehandlers.ValidateParametersHandler.Fn(req)
require.Error(t, req.Error)
assert.Equal(t, "InvalidParameter", req.Error.(awserr.Error).Code())
assert.Equal(t, "3 validation error(s) found.", req.Error.(awserr.Error).Message())
errs := req.Error.(awserr.BatchedErrors).OrigErrs()
assert.Len(t, errs, 3)
assert.Equal(t, "ParamRequiredError: missing required field, StructShape.RequiredList.", errs[0].Error())
assert.Equal(t, "ParamRequiredError: missing required field, StructShape.RequiredMap.", errs[1].Error())
assert.Equal(t, "ParamRequiredError: missing required field, StructShape.RequiredBool.", errs[2].Error())
assert.Equal(t, "InvalidParameter: 3 validation error(s) found.\n- missing required field, StructShape.RequiredList.\n- missing required field, StructShape.RequiredMap.\n- missing required field, StructShape.RequiredBool.\n", req.Error.Error())
}
func TestNestedMissingRequiredParameters(t *testing.T) {
input := &StructShape{
RequiredList: []*ConditionalStructShape{{}},
RequiredMap: map[string]*ConditionalStructShape{
"key1": {Name: aws.String("Name")},
"key2": {},
},
RequiredBool: aws.Bool(true),
OptionalStruct: &ConditionalStructShape{},
}
req := testSvc.NewRequest(&request.Operation{}, input, nil)
corehandlers.ValidateParametersHandler.Fn(req)
require.Error(t, req.Error)
assert.Equal(t, "InvalidParameter", req.Error.(awserr.Error).Code())
assert.Equal(t, "3 validation error(s) found.", req.Error.(awserr.Error).Message())
errs := req.Error.(awserr.BatchedErrors).OrigErrs()
assert.Len(t, errs, 3)
assert.Equal(t, "ParamRequiredError: missing required field, StructShape.RequiredList[0].Name.", errs[0].Error())
assert.Equal(t, "ParamRequiredError: missing required field, StructShape.RequiredMap[key2].Name.", errs[1].Error())
assert.Equal(t, "ParamRequiredError: missing required field, StructShape.OptionalStruct.Name.", errs[2].Error())
}
type testInput struct {
StringField *string `min:"5"`
ListField []string `min:"3"`
MapField map[string]string `min:"4"`
}
func (s testInput) Validate() error {
invalidParams := request.ErrInvalidParams{Context: "testInput"}
if s.StringField != nil && len(*s.StringField) < 5 {
invalidParams.Add(request.NewErrParamMinLen("StringField", 5))
}
if s.ListField != nil && len(s.ListField) < 3 {
invalidParams.Add(request.NewErrParamMinLen("ListField", 3))
}
if s.MapField != nil && len(s.MapField) < 4 {
invalidParams.Add(request.NewErrParamMinLen("MapField", 4))
}
if invalidParams.Len() > 0 {
return invalidParams
}
return nil
}
var testsFieldMin = []struct {
err awserr.Error
in testInput
}{
{
err: func() awserr.Error {
invalidParams := request.ErrInvalidParams{Context: "testInput"}
invalidParams.Add(request.NewErrParamMinLen("StringField", 5))
return invalidParams
}(),
in: testInput{StringField: aws.String("abcd")},
},
{
err: func() awserr.Error {
invalidParams := request.ErrInvalidParams{Context: "testInput"}
invalidParams.Add(request.NewErrParamMinLen("StringField", 5))
invalidParams.Add(request.NewErrParamMinLen("ListField", 3))
return invalidParams
}(),
in: testInput{StringField: aws.String("abcd"), ListField: []string{"a", "b"}},
},
{
err: func() awserr.Error {
invalidParams := request.ErrInvalidParams{Context: "testInput"}
invalidParams.Add(request.NewErrParamMinLen("StringField", 5))
invalidParams.Add(request.NewErrParamMinLen("ListField", 3))
invalidParams.Add(request.NewErrParamMinLen("MapField", 4))
return invalidParams
}(),
in: testInput{StringField: aws.String("abcd"), ListField: []string{"a", "b"}, MapField: map[string]string{"a": "a", "b": "b"}},
},
{
err: nil,
in: testInput{StringField: aws.String("abcde"),
ListField: []string{"a", "b", "c"}, MapField: map[string]string{"a": "a", "b": "b", "c": "c", "d": "d"}},
},
}
func TestValidateFieldMinParameter(t *testing.T) {
for i, c := range testsFieldMin {
req := testSvc.NewRequest(&request.Operation{}, &c.in, nil)
corehandlers.ValidateParametersHandler.Fn(req)
assert.Equal(t, c.err, req.Error, "%d case failed", i)
}
}
func BenchmarkValidateAny(b *testing.B) {
input := &kinesis.PutRecordsInput{
StreamName: aws.String("stream"),
}
for i := 0; i < 100; i++ {
record := &kinesis.PutRecordsRequestEntry{
Data: make([]byte, 10000),
PartitionKey: aws.String("partition"),
}
input.Records = append(input.Records, record)
}
req, _ := kinesis.New(unit.Session).PutRecordsRequest(input)
b.ResetTimer()
for i := 0; i < b.N; i++ {
corehandlers.ValidateParametersHandler.Fn(req)
if err := req.Error; err != nil {
b.Fatalf("validation failed: %v", err)
}
}
}

View File

@ -1,154 +0,0 @@
package credentials
import (
"testing"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/stretchr/testify/assert"
)
type secondStubProvider struct {
creds Value
expired bool
err error
}
func (s *secondStubProvider) Retrieve() (Value, error) {
s.expired = false
s.creds.ProviderName = "secondStubProvider"
return s.creds, s.err
}
func (s *secondStubProvider) IsExpired() bool {
return s.expired
}
func TestChainProviderWithNames(t *testing.T) {
p := &ChainProvider{
Providers: []Provider{
&stubProvider{err: awserr.New("FirstError", "first provider error", nil)},
&stubProvider{err: awserr.New("SecondError", "second provider error", nil)},
&secondStubProvider{
creds: Value{
AccessKeyID: "AKIF",
SecretAccessKey: "NOSECRET",
SessionToken: "",
},
},
&stubProvider{
creds: Value{
AccessKeyID: "AKID",
SecretAccessKey: "SECRET",
SessionToken: "",
},
},
},
}
creds, err := p.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "secondStubProvider", creds.ProviderName, "Expect provider name to match")
// Also check credentials
assert.Equal(t, "AKIF", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "NOSECRET", creds.SecretAccessKey, "Expect secret access key to match")
assert.Empty(t, creds.SessionToken, "Expect session token to be empty")
}
func TestChainProviderGet(t *testing.T) {
p := &ChainProvider{
Providers: []Provider{
&stubProvider{err: awserr.New("FirstError", "first provider error", nil)},
&stubProvider{err: awserr.New("SecondError", "second provider error", nil)},
&stubProvider{
creds: Value{
AccessKeyID: "AKID",
SecretAccessKey: "SECRET",
SessionToken: "",
},
},
},
}
creds, err := p.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "AKID", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "SECRET", creds.SecretAccessKey, "Expect secret access key to match")
assert.Empty(t, creds.SessionToken, "Expect session token to be empty")
}
func TestChainProviderIsExpired(t *testing.T) {
stubProvider := &stubProvider{expired: true}
p := &ChainProvider{
Providers: []Provider{
stubProvider,
},
}
assert.True(t, p.IsExpired(), "Expect expired to be true before any Retrieve")
_, err := p.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.False(t, p.IsExpired(), "Expect not expired after retrieve")
stubProvider.expired = true
assert.True(t, p.IsExpired(), "Expect return of expired provider")
_, err = p.Retrieve()
assert.False(t, p.IsExpired(), "Expect not expired after retrieve")
}
func TestChainProviderWithNoProvider(t *testing.T) {
p := &ChainProvider{
Providers: []Provider{},
}
assert.True(t, p.IsExpired(), "Expect expired with no providers")
_, err := p.Retrieve()
assert.Equal(t,
ErrNoValidProvidersFoundInChain,
err,
"Expect no providers error returned")
}
func TestChainProviderWithNoValidProvider(t *testing.T) {
errs := []error{
awserr.New("FirstError", "first provider error", nil),
awserr.New("SecondError", "second provider error", nil),
}
p := &ChainProvider{
Providers: []Provider{
&stubProvider{err: errs[0]},
&stubProvider{err: errs[1]},
},
}
assert.True(t, p.IsExpired(), "Expect expired with no providers")
_, err := p.Retrieve()
assert.Equal(t,
ErrNoValidProvidersFoundInChain,
err,
"Expect no providers error returned")
}
func TestChainProviderWithNoValidProviderWithVerboseEnabled(t *testing.T) {
errs := []error{
awserr.New("FirstError", "first provider error", nil),
awserr.New("SecondError", "second provider error", nil),
}
p := &ChainProvider{
VerboseErrors: true,
Providers: []Provider{
&stubProvider{err: errs[0]},
&stubProvider{err: errs[1]},
},
}
assert.True(t, p.IsExpired(), "Expect expired with no providers")
_, err := p.Retrieve()
assert.Equal(t,
awserr.NewBatchError("NoCredentialProviders", "no valid providers in chain", errs),
err,
"Expect no providers error returned")
}

View File

@ -1,73 +0,0 @@
package credentials
import (
"testing"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/stretchr/testify/assert"
)
type stubProvider struct {
creds Value
expired bool
err error
}
func (s *stubProvider) Retrieve() (Value, error) {
s.expired = false
s.creds.ProviderName = "stubProvider"
return s.creds, s.err
}
func (s *stubProvider) IsExpired() bool {
return s.expired
}
func TestCredentialsGet(t *testing.T) {
c := NewCredentials(&stubProvider{
creds: Value{
AccessKeyID: "AKID",
SecretAccessKey: "SECRET",
SessionToken: "",
},
expired: true,
})
creds, err := c.Get()
assert.Nil(t, err, "Expected no error")
assert.Equal(t, "AKID", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "SECRET", creds.SecretAccessKey, "Expect secret access key to match")
assert.Empty(t, creds.SessionToken, "Expect session token to be empty")
}
func TestCredentialsGetWithError(t *testing.T) {
c := NewCredentials(&stubProvider{err: awserr.New("provider error", "", nil), expired: true})
_, err := c.Get()
assert.Equal(t, "provider error", err.(awserr.Error).Code(), "Expected provider error")
}
func TestCredentialsExpire(t *testing.T) {
stub := &stubProvider{}
c := NewCredentials(stub)
stub.expired = false
assert.True(t, c.IsExpired(), "Expected to start out expired")
c.Expire()
assert.True(t, c.IsExpired(), "Expected to be expired")
c.forceRefresh = false
assert.False(t, c.IsExpired(), "Expected not to be expired")
stub.expired = true
assert.True(t, c.IsExpired(), "Expected to be expired")
}
func TestCredentialsGetWithProviderName(t *testing.T) {
stub := &stubProvider{}
c := NewCredentials(stub)
creds, err := c.Get()
assert.Nil(t, err, "Expected no error")
assert.Equal(t, creds.ProviderName, "stubProvider", "Expected provider name to match")
}

View File

@ -1,159 +0,0 @@
package ec2rolecreds_test
import (
"fmt"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
"github.com/aws/aws-sdk-go/aws/ec2metadata"
"github.com/aws/aws-sdk-go/awstesting/unit"
)
const credsRespTmpl = `{
"Code": "Success",
"Type": "AWS-HMAC",
"AccessKeyId" : "accessKey",
"SecretAccessKey" : "secret",
"Token" : "token",
"Expiration" : "%s",
"LastUpdated" : "2009-11-23T0:00:00Z"
}`
const credsFailRespTmpl = `{
"Code": "ErrorCode",
"Message": "ErrorMsg",
"LastUpdated": "2009-11-23T0:00:00Z"
}`
func initTestServer(expireOn string, failAssume bool) *httptest.Server {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/latest/meta-data/iam/security-credentials" {
fmt.Fprintln(w, "RoleName")
} else if r.URL.Path == "/latest/meta-data/iam/security-credentials/RoleName" {
if failAssume {
fmt.Fprint(w, credsFailRespTmpl)
} else {
fmt.Fprintf(w, credsRespTmpl, expireOn)
}
} else {
http.Error(w, "bad request", http.StatusBadRequest)
}
}))
return server
}
func TestEC2RoleProvider(t *testing.T) {
server := initTestServer("2014-12-16T01:51:37Z", false)
defer server.Close()
p := &ec2rolecreds.EC2RoleProvider{
Client: ec2metadata.New(unit.Session, &aws.Config{Endpoint: aws.String(server.URL + "/latest")}),
}
creds, err := p.Retrieve()
assert.Nil(t, err, "Expect no error, %v", err)
assert.Equal(t, "accessKey", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "secret", creds.SecretAccessKey, "Expect secret access key to match")
assert.Equal(t, "token", creds.SessionToken, "Expect session token to match")
}
func TestEC2RoleProviderFailAssume(t *testing.T) {
server := initTestServer("2014-12-16T01:51:37Z", true)
defer server.Close()
p := &ec2rolecreds.EC2RoleProvider{
Client: ec2metadata.New(unit.Session, &aws.Config{Endpoint: aws.String(server.URL + "/latest")}),
}
creds, err := p.Retrieve()
assert.Error(t, err, "Expect error")
e := err.(awserr.Error)
assert.Equal(t, "ErrorCode", e.Code())
assert.Equal(t, "ErrorMsg", e.Message())
assert.Nil(t, e.OrigErr())
assert.Equal(t, "", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "", creds.SecretAccessKey, "Expect secret access key to match")
assert.Equal(t, "", creds.SessionToken, "Expect session token to match")
}
func TestEC2RoleProviderIsExpired(t *testing.T) {
server := initTestServer("2014-12-16T01:51:37Z", false)
defer server.Close()
p := &ec2rolecreds.EC2RoleProvider{
Client: ec2metadata.New(unit.Session, &aws.Config{Endpoint: aws.String(server.URL + "/latest")}),
}
p.CurrentTime = func() time.Time {
return time.Date(2014, 12, 15, 21, 26, 0, 0, time.UTC)
}
assert.True(t, p.IsExpired(), "Expect creds to be expired before retrieve.")
_, err := p.Retrieve()
assert.Nil(t, err, "Expect no error, %v", err)
assert.False(t, p.IsExpired(), "Expect creds to not be expired after retrieve.")
p.CurrentTime = func() time.Time {
return time.Date(3014, 12, 15, 21, 26, 0, 0, time.UTC)
}
assert.True(t, p.IsExpired(), "Expect creds to be expired.")
}
func TestEC2RoleProviderExpiryWindowIsExpired(t *testing.T) {
server := initTestServer("2014-12-16T01:51:37Z", false)
defer server.Close()
p := &ec2rolecreds.EC2RoleProvider{
Client: ec2metadata.New(unit.Session, &aws.Config{Endpoint: aws.String(server.URL + "/latest")}),
ExpiryWindow: time.Hour * 1,
}
p.CurrentTime = func() time.Time {
return time.Date(2014, 12, 15, 0, 51, 37, 0, time.UTC)
}
assert.True(t, p.IsExpired(), "Expect creds to be expired before retrieve.")
_, err := p.Retrieve()
assert.Nil(t, err, "Expect no error, %v", err)
assert.False(t, p.IsExpired(), "Expect creds to not be expired after retrieve.")
p.CurrentTime = func() time.Time {
return time.Date(2014, 12, 16, 0, 55, 37, 0, time.UTC)
}
assert.True(t, p.IsExpired(), "Expect creds to be expired.")
}
func BenchmarkEC2RoleProvider(b *testing.B) {
server := initTestServer("2014-12-16T01:51:37Z", false)
defer server.Close()
p := &ec2rolecreds.EC2RoleProvider{
Client: ec2metadata.New(unit.Session, &aws.Config{Endpoint: aws.String(server.URL + "/latest")}),
}
_, err := p.Retrieve()
if err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err := p.Retrieve(); err != nil {
b.Fatal(err)
}
}
}

View File

@ -1,111 +0,0 @@
package endpointcreds_test
import (
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/credentials/endpointcreds"
"github.com/aws/aws-sdk-go/awstesting/unit"
"github.com/stretchr/testify/assert"
)
func TestRetrieveRefreshableCredentials(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
assert.Equal(t, "/path/to/endpoint", r.URL.Path)
assert.Equal(t, "application/json", r.Header.Get("Accept"))
assert.Equal(t, "else", r.URL.Query().Get("something"))
encoder := json.NewEncoder(w)
err := encoder.Encode(map[string]interface{}{
"AccessKeyID": "AKID",
"SecretAccessKey": "SECRET",
"Token": "TOKEN",
"Expiration": time.Now().Add(1 * time.Hour),
})
if err != nil {
fmt.Println("failed to write out creds", err)
}
}))
client := endpointcreds.NewProviderClient(*unit.Session.Config,
unit.Session.Handlers,
server.URL+"/path/to/endpoint?something=else",
)
creds, err := client.Retrieve()
assert.NoError(t, err)
assert.Equal(t, "AKID", creds.AccessKeyID)
assert.Equal(t, "SECRET", creds.SecretAccessKey)
assert.Equal(t, "TOKEN", creds.SessionToken)
assert.False(t, client.IsExpired())
client.(*endpointcreds.Provider).CurrentTime = func() time.Time {
return time.Now().Add(2 * time.Hour)
}
assert.True(t, client.IsExpired())
}
func TestRetrieveStaticCredentials(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
encoder := json.NewEncoder(w)
err := encoder.Encode(map[string]interface{}{
"AccessKeyID": "AKID",
"SecretAccessKey": "SECRET",
})
if err != nil {
fmt.Println("failed to write out creds", err)
}
}))
client := endpointcreds.NewProviderClient(*unit.Session.Config, unit.Session.Handlers, server.URL)
creds, err := client.Retrieve()
assert.NoError(t, err)
assert.Equal(t, "AKID", creds.AccessKeyID)
assert.Equal(t, "SECRET", creds.SecretAccessKey)
assert.Empty(t, creds.SessionToken)
assert.False(t, client.IsExpired())
}
func TestFailedRetrieveCredentials(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(400)
encoder := json.NewEncoder(w)
err := encoder.Encode(map[string]interface{}{
"Code": "Error",
"Message": "Message",
})
if err != nil {
fmt.Println("failed to write error", err)
}
}))
client := endpointcreds.NewProviderClient(*unit.Session.Config, unit.Session.Handlers, server.URL)
creds, err := client.Retrieve()
assert.Error(t, err)
aerr := err.(awserr.Error)
assert.Equal(t, "CredentialsEndpointError", aerr.Code())
assert.Equal(t, "failed to load credentials", aerr.Message())
aerr = aerr.OrigErr().(awserr.Error)
assert.Equal(t, "Error", aerr.Code())
assert.Equal(t, "Message", aerr.Message())
assert.Empty(t, creds.AccessKeyID)
assert.Empty(t, creds.SecretAccessKey)
assert.Empty(t, creds.SessionToken)
assert.True(t, client.IsExpired())
}

View File

@ -1,70 +0,0 @@
package credentials
import (
"github.com/stretchr/testify/assert"
"os"
"testing"
)
func TestEnvProviderRetrieve(t *testing.T) {
os.Clearenv()
os.Setenv("AWS_ACCESS_KEY_ID", "access")
os.Setenv("AWS_SECRET_ACCESS_KEY", "secret")
os.Setenv("AWS_SESSION_TOKEN", "token")
e := EnvProvider{}
creds, err := e.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "access", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "secret", creds.SecretAccessKey, "Expect secret access key to match")
assert.Equal(t, "token", creds.SessionToken, "Expect session token to match")
}
func TestEnvProviderIsExpired(t *testing.T) {
os.Clearenv()
os.Setenv("AWS_ACCESS_KEY_ID", "access")
os.Setenv("AWS_SECRET_ACCESS_KEY", "secret")
os.Setenv("AWS_SESSION_TOKEN", "token")
e := EnvProvider{}
assert.True(t, e.IsExpired(), "Expect creds to be expired before retrieve.")
_, err := e.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.False(t, e.IsExpired(), "Expect creds to not be expired after retrieve.")
}
func TestEnvProviderNoAccessKeyID(t *testing.T) {
os.Clearenv()
os.Setenv("AWS_SECRET_ACCESS_KEY", "secret")
e := EnvProvider{}
creds, err := e.Retrieve()
assert.Equal(t, ErrAccessKeyIDNotFound, err, "ErrAccessKeyIDNotFound expected, but was %#v error: %#v", creds, err)
}
func TestEnvProviderNoSecretAccessKey(t *testing.T) {
os.Clearenv()
os.Setenv("AWS_ACCESS_KEY_ID", "access")
e := EnvProvider{}
creds, err := e.Retrieve()
assert.Equal(t, ErrSecretAccessKeyNotFound, err, "ErrSecretAccessKeyNotFound expected, but was %#v error: %#v", creds, err)
}
func TestEnvProviderAlternateNames(t *testing.T) {
os.Clearenv()
os.Setenv("AWS_ACCESS_KEY", "access")
os.Setenv("AWS_SECRET_KEY", "secret")
e := EnvProvider{}
creds, err := e.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "access", creds.AccessKeyID, "Expected access key ID")
assert.Equal(t, "secret", creds.SecretAccessKey, "Expected secret access key")
assert.Empty(t, creds.SessionToken, "Expected no token")
}

View File

@ -1,116 +0,0 @@
package credentials
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
)
func TestSharedCredentialsProvider(t *testing.T) {
os.Clearenv()
p := SharedCredentialsProvider{Filename: "example.ini", Profile: ""}
creds, err := p.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "accessKey", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "secret", creds.SecretAccessKey, "Expect secret access key to match")
assert.Equal(t, "token", creds.SessionToken, "Expect session token to match")
}
func TestSharedCredentialsProviderIsExpired(t *testing.T) {
os.Clearenv()
p := SharedCredentialsProvider{Filename: "example.ini", Profile: ""}
assert.True(t, p.IsExpired(), "Expect creds to be expired before retrieve")
_, err := p.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.False(t, p.IsExpired(), "Expect creds to not be expired after retrieve")
}
func TestSharedCredentialsProviderWithAWS_SHARED_CREDENTIALS_FILE(t *testing.T) {
os.Clearenv()
os.Setenv("AWS_SHARED_CREDENTIALS_FILE", "example.ini")
p := SharedCredentialsProvider{}
creds, err := p.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "accessKey", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "secret", creds.SecretAccessKey, "Expect secret access key to match")
assert.Equal(t, "token", creds.SessionToken, "Expect session token to match")
}
func TestSharedCredentialsProviderWithAWS_SHARED_CREDENTIALS_FILEAbsPath(t *testing.T) {
os.Clearenv()
wd, err := os.Getwd()
assert.NoError(t, err)
os.Setenv("AWS_SHARED_CREDENTIALS_FILE", filepath.Join(wd, "example.ini"))
p := SharedCredentialsProvider{}
creds, err := p.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "accessKey", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "secret", creds.SecretAccessKey, "Expect secret access key to match")
assert.Equal(t, "token", creds.SessionToken, "Expect session token to match")
}
func TestSharedCredentialsProviderWithAWS_PROFILE(t *testing.T) {
os.Clearenv()
os.Setenv("AWS_PROFILE", "no_token")
p := SharedCredentialsProvider{Filename: "example.ini", Profile: ""}
creds, err := p.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "accessKey", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "secret", creds.SecretAccessKey, "Expect secret access key to match")
assert.Empty(t, creds.SessionToken, "Expect no token")
}
func TestSharedCredentialsProviderWithoutTokenFromProfile(t *testing.T) {
os.Clearenv()
p := SharedCredentialsProvider{Filename: "example.ini", Profile: "no_token"}
creds, err := p.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "accessKey", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "secret", creds.SecretAccessKey, "Expect secret access key to match")
assert.Empty(t, creds.SessionToken, "Expect no token")
}
func TestSharedCredentialsProviderColonInCredFile(t *testing.T) {
os.Clearenv()
p := SharedCredentialsProvider{Filename: "example.ini", Profile: "with_colon"}
creds, err := p.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "accessKey", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "secret", creds.SecretAccessKey, "Expect secret access key to match")
assert.Empty(t, creds.SessionToken, "Expect no token")
}
func BenchmarkSharedCredentialsProvider(b *testing.B) {
os.Clearenv()
p := SharedCredentialsProvider{Filename: "example.ini", Profile: ""}
_, err := p.Retrieve()
if err != nil {
b.Fatal(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := p.Retrieve()
if err != nil {
b.Fatal(err)
}
}
}

View File

@ -1,34 +0,0 @@
package credentials
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestStaticProviderGet(t *testing.T) {
s := StaticProvider{
Value: Value{
AccessKeyID: "AKID",
SecretAccessKey: "SECRET",
SessionToken: "",
},
}
creds, err := s.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "AKID", creds.AccessKeyID, "Expect access key ID to match")
assert.Equal(t, "SECRET", creds.SecretAccessKey, "Expect secret access key to match")
assert.Empty(t, creds.SessionToken, "Expect no session token")
}
func TestStaticProviderIsExpired(t *testing.T) {
s := StaticProvider{
Value: Value{
AccessKeyID: "AKID",
SecretAccessKey: "SECRET",
SessionToken: "",
},
}
assert.False(t, s.IsExpired(), "Expect static credentials to never expire")
}

View File

@ -1,56 +0,0 @@
package stscreds
import (
"testing"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/sts"
"github.com/stretchr/testify/assert"
)
type stubSTS struct {
}
func (s *stubSTS) AssumeRole(input *sts.AssumeRoleInput) (*sts.AssumeRoleOutput, error) {
expiry := time.Now().Add(60 * time.Minute)
return &sts.AssumeRoleOutput{
Credentials: &sts.Credentials{
// Just reflect the role arn to the provider.
AccessKeyId: input.RoleArn,
SecretAccessKey: aws.String("assumedSecretAccessKey"),
SessionToken: aws.String("assumedSessionToken"),
Expiration: &expiry,
},
}, nil
}
func TestAssumeRoleProvider(t *testing.T) {
stub := &stubSTS{}
p := &AssumeRoleProvider{
Client: stub,
RoleARN: "roleARN",
}
creds, err := p.Retrieve()
assert.Nil(t, err, "Expect no error")
assert.Equal(t, "roleARN", creds.AccessKeyID, "Expect access key ID to be reflected role ARN")
assert.Equal(t, "assumedSecretAccessKey", creds.SecretAccessKey, "Expect secret access key to match")
assert.Equal(t, "assumedSessionToken", creds.SessionToken, "Expect session token to match")
}
func BenchmarkAssumeRoleProvider(b *testing.B) {
stub := &stubSTS{}
p := &AssumeRoleProvider{
Client: stub,
RoleARN: "roleARN",
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err := p.Retrieve(); err != nil {
b.Fatal(err)
}
}
}

View File

@ -1,39 +0,0 @@
package defaults
import (
"fmt"
"os"
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
"github.com/aws/aws-sdk-go/aws/credentials/endpointcreds"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/stretchr/testify/assert"
)
func TestECSCredProvider(t *testing.T) {
defer os.Clearenv()
os.Setenv("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI", "/abc/123")
provider := RemoteCredProvider(aws.Config{}, request.Handlers{})
assert.NotNil(t, provider)
ecsProvider, ok := provider.(*endpointcreds.Provider)
assert.NotNil(t, ecsProvider)
assert.True(t, ok)
assert.Equal(t, fmt.Sprintf("http://169.254.170.2/abc/123"),
ecsProvider.Client.Endpoint)
}
func TestDefaultEC2RoleProvider(t *testing.T) {
provider := RemoteCredProvider(aws.Config{}, request.Handlers{})
assert.NotNil(t, provider)
ec2Provider, ok := provider.(*ec2rolecreds.EC2RoleProvider)
assert.NotNil(t, ec2Provider)
assert.True(t, ok)
}

View File

@ -1,195 +0,0 @@
package ec2metadata_test
import (
"bytes"
"io/ioutil"
"net/http"
"net/http/httptest"
"path"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/ec2metadata"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/awstesting/unit"
)
const instanceIdentityDocument = `{
"devpayProductCodes" : null,
"availabilityZone" : "us-east-1d",
"privateIp" : "10.158.112.84",
"version" : "2010-08-31",
"region" : "us-east-1",
"instanceId" : "i-1234567890abcdef0",
"billingProducts" : null,
"instanceType" : "t1.micro",
"accountId" : "123456789012",
"pendingTime" : "2015-11-19T16:32:11Z",
"imageId" : "ami-5fb8c835",
"kernelId" : "aki-919dcaf8",
"ramdiskId" : null,
"architecture" : "x86_64"
}`
const validIamInfo = `{
"Code" : "Success",
"LastUpdated" : "2016-03-17T12:27:32Z",
"InstanceProfileArn" : "arn:aws:iam::123456789012:instance-profile/my-instance-profile",
"InstanceProfileId" : "AIPAABCDEFGHIJKLMN123"
}`
const unsuccessfulIamInfo = `{
"Code" : "Failed",
"LastUpdated" : "2016-03-17T12:27:32Z",
"InstanceProfileArn" : "arn:aws:iam::123456789012:instance-profile/my-instance-profile",
"InstanceProfileId" : "AIPAABCDEFGHIJKLMN123"
}`
func initTestServer(path string, resp string) *httptest.Server {
return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.RequestURI != path {
http.Error(w, "not found", http.StatusNotFound)
return
}
w.Write([]byte(resp))
}))
}
func TestEndpoint(t *testing.T) {
c := ec2metadata.New(unit.Session)
op := &request.Operation{
Name: "GetMetadata",
HTTPMethod: "GET",
HTTPPath: path.Join("/", "meta-data", "testpath"),
}
req := c.NewRequest(op, nil, nil)
assert.Equal(t, "http://169.254.169.254/latest", req.ClientInfo.Endpoint)
assert.Equal(t, "http://169.254.169.254/latest/meta-data/testpath", req.HTTPRequest.URL.String())
}
func TestGetMetadata(t *testing.T) {
server := initTestServer(
"/latest/meta-data/some/path",
"success", // real response includes suffix
)
defer server.Close()
c := ec2metadata.New(unit.Session, &aws.Config{Endpoint: aws.String(server.URL + "/latest")})
resp, err := c.GetMetadata("some/path")
assert.NoError(t, err)
assert.Equal(t, "success", resp)
}
func TestGetRegion(t *testing.T) {
server := initTestServer(
"/latest/meta-data/placement/availability-zone",
"us-west-2a", // real response includes suffix
)
defer server.Close()
c := ec2metadata.New(unit.Session, &aws.Config{Endpoint: aws.String(server.URL + "/latest")})
region, err := c.Region()
assert.NoError(t, err)
assert.Equal(t, "us-west-2", region)
}
func TestMetadataAvailable(t *testing.T) {
server := initTestServer(
"/latest/meta-data/instance-id",
"instance-id",
)
defer server.Close()
c := ec2metadata.New(unit.Session, &aws.Config{Endpoint: aws.String(server.URL + "/latest")})
available := c.Available()
assert.True(t, available)
}
func TestMetadataIAMInfo_success(t *testing.T) {
server := initTestServer(
"/latest/meta-data/iam/info",
validIamInfo,
)
defer server.Close()
c := ec2metadata.New(unit.Session, &aws.Config{Endpoint: aws.String(server.URL + "/latest")})
iamInfo, err := c.IAMInfo()
assert.NoError(t, err)
assert.Equal(t, "Success", iamInfo.Code)
assert.Equal(t, "arn:aws:iam::123456789012:instance-profile/my-instance-profile", iamInfo.InstanceProfileArn)
assert.Equal(t, "AIPAABCDEFGHIJKLMN123", iamInfo.InstanceProfileID)
}
func TestMetadataIAMInfo_failure(t *testing.T) {
server := initTestServer(
"/latest/meta-data/iam/info",
unsuccessfulIamInfo,
)
defer server.Close()
c := ec2metadata.New(unit.Session, &aws.Config{Endpoint: aws.String(server.URL + "/latest")})
iamInfo, err := c.IAMInfo()
assert.NotNil(t, err)
assert.Equal(t, "", iamInfo.Code)
assert.Equal(t, "", iamInfo.InstanceProfileArn)
assert.Equal(t, "", iamInfo.InstanceProfileID)
}
func TestMetadataNotAvailable(t *testing.T) {
c := ec2metadata.New(unit.Session)
c.Handlers.Send.Clear()
c.Handlers.Send.PushBack(func(r *request.Request) {
r.HTTPResponse = &http.Response{
StatusCode: int(0),
Status: http.StatusText(int(0)),
Body: ioutil.NopCloser(bytes.NewReader([]byte{})),
}
r.Error = awserr.New("RequestError", "send request failed", nil)
r.Retryable = aws.Bool(true) // network errors are retryable
})
available := c.Available()
assert.False(t, available)
}
func TestMetadataErrorResponse(t *testing.T) {
c := ec2metadata.New(unit.Session)
c.Handlers.Send.Clear()
c.Handlers.Send.PushBack(func(r *request.Request) {
r.HTTPResponse = &http.Response{
StatusCode: http.StatusBadRequest,
Status: http.StatusText(http.StatusBadRequest),
Body: ioutil.NopCloser(strings.NewReader("error message text")),
}
r.Retryable = aws.Bool(false) // mark the error response as not retryable
})
data, err := c.GetMetadata("uri/path")
assert.Empty(t, data)
assert.Contains(t, err.Error(), "error message text")
}
func TestEC2RoleProviderInstanceIdentity(t *testing.T) {
server := initTestServer(
"/latest/dynamic/instance-identity/document",
instanceIdentityDocument,
)
defer server.Close()
c := ec2metadata.New(unit.Session, &aws.Config{Endpoint: aws.String(server.URL + "/latest")})
doc, err := c.GetInstanceIdentityDocument()
assert.Nil(t, err, "Expect no error, %v", err)
assert.Equal(t, doc.AccountID, "123456789012")
assert.Equal(t, doc.AvailabilityZone, "us-east-1d")
assert.Equal(t, doc.Region, "us-east-1")
}

View File

@ -1,78 +0,0 @@
package ec2metadata_test
import (
"net/http"
"net/http/httptest"
"sync"
"testing"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/ec2metadata"
"github.com/aws/aws-sdk-go/awstesting/unit"
"github.com/stretchr/testify/assert"
)
func TestClientOverrideDefaultHTTPClientTimeout(t *testing.T) {
svc := ec2metadata.New(unit.Session)
assert.NotEqual(t, http.DefaultClient, svc.Config.HTTPClient)
assert.Equal(t, 5*time.Second, svc.Config.HTTPClient.Timeout)
}
func TestClientNotOverrideDefaultHTTPClientTimeout(t *testing.T) {
http.DefaultClient.Transport = &http.Transport{}
defer func() {
http.DefaultClient.Transport = nil
}()
svc := ec2metadata.New(unit.Session)
assert.Equal(t, http.DefaultClient, svc.Config.HTTPClient)
tr, ok := svc.Config.HTTPClient.Transport.(*http.Transport)
assert.True(t, ok)
assert.NotNil(t, tr)
assert.Nil(t, tr.Dial)
}
func TestClientDisableOverrideDefaultHTTPClientTimeout(t *testing.T) {
svc := ec2metadata.New(unit.Session, aws.NewConfig().WithEC2MetadataDisableTimeoutOverride(true))
assert.Equal(t, http.DefaultClient, svc.Config.HTTPClient)
}
func TestClientOverrideDefaultHTTPClientTimeoutRace(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("us-east-1a"))
}))
cfg := aws.NewConfig().WithEndpoint(server.URL)
runEC2MetadataClients(t, cfg, 100)
}
func TestClientOverrideDefaultHTTPClientTimeoutRaceWithTransport(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("us-east-1a"))
}))
cfg := aws.NewConfig().WithEndpoint(server.URL).WithHTTPClient(&http.Client{
Transport: http.DefaultTransport,
})
runEC2MetadataClients(t, cfg, 100)
}
func runEC2MetadataClients(t *testing.T, cfg *aws.Config, atOnce int) {
var wg sync.WaitGroup
wg.Add(atOnce)
for i := 0; i < atOnce; i++ {
go func() {
svc := ec2metadata.New(unit.Session, cfg)
_, err := svc.Region()
assert.NoError(t, err)
wg.Done()
}()
}
wg.Wait()
}

View File

@ -1,87 +0,0 @@
package request_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/request"
)
func TestHandlerList(t *testing.T) {
s := ""
r := &request.Request{}
l := request.HandlerList{}
l.PushBack(func(r *request.Request) {
s += "a"
r.Data = s
})
l.Run(r)
assert.Equal(t, "a", s)
assert.Equal(t, "a", r.Data)
}
func TestMultipleHandlers(t *testing.T) {
r := &request.Request{}
l := request.HandlerList{}
l.PushBack(func(r *request.Request) { r.Data = nil })
l.PushFront(func(r *request.Request) { r.Data = aws.Bool(true) })
l.Run(r)
if r.Data != nil {
t.Error("Expected handler to execute")
}
}
func TestNamedHandlers(t *testing.T) {
l := request.HandlerList{}
named := request.NamedHandler{Name: "Name", Fn: func(r *request.Request) {}}
named2 := request.NamedHandler{Name: "NotName", Fn: func(r *request.Request) {}}
l.PushBackNamed(named)
l.PushBackNamed(named)
l.PushBackNamed(named2)
l.PushBack(func(r *request.Request) {})
assert.Equal(t, 4, l.Len())
l.Remove(named)
assert.Equal(t, 2, l.Len())
}
func TestLoggedHandlers(t *testing.T) {
expectedHandlers := []string{"name1", "name2"}
l := request.HandlerList{}
loggedHandlers := []string{}
l.AfterEachFn = request.HandlerListLogItem
cfg := aws.Config{Logger: aws.LoggerFunc(func(args ...interface{}) {
loggedHandlers = append(loggedHandlers, args[2].(string))
})}
named1 := request.NamedHandler{Name: "name1", Fn: func(r *request.Request) {}}
named2 := request.NamedHandler{Name: "name2", Fn: func(r *request.Request) {}}
l.PushBackNamed(named1)
l.PushBackNamed(named2)
l.Run(&request.Request{Config: cfg})
assert.Equal(t, expectedHandlers, loggedHandlers)
}
func TestStopHandlers(t *testing.T) {
l := request.HandlerList{}
stopAt := 1
l.AfterEachFn = func(item request.HandlerListRunItem) bool {
return item.Index != stopAt
}
called := 0
l.PushBackNamed(request.NamedHandler{Name: "name1", Fn: func(r *request.Request) {
called++
}})
l.PushBackNamed(request.NamedHandler{Name: "name2", Fn: func(r *request.Request) {
called++
}})
l.PushBackNamed(request.NamedHandler{Name: "name3", Fn: func(r *request.Request) {
assert.Fail(t, "third handler should not be called")
}})
l.Run(&request.Request{})
assert.Equal(t, 2, called, "Expect only two handlers to be called")
}

View File

@ -1,33 +0,0 @@
// +build go1.5
package request
import (
"io"
"net/http"
"net/url"
)
func copyHTTPRequest(r *http.Request, body io.ReadCloser) *http.Request {
req := &http.Request{
URL: &url.URL{},
Header: http.Header{},
Close: r.Close,
Body: body,
Host: r.Host,
Method: r.Method,
Proto: r.Proto,
ContentLength: r.ContentLength,
// Cancel will be deprecated in 1.7 and will be replaced with Context
Cancel: r.Cancel,
}
*req.URL = *r.URL
for k, v := range r.Header {
for _, vv := range v {
req.Header.Add(k, vv)
}
}
return req
}

View File

@ -1,31 +0,0 @@
// +build !go1.5
package request
import (
"io"
"net/http"
"net/url"
)
func copyHTTPRequest(r *http.Request, body io.ReadCloser) *http.Request {
req := &http.Request{
URL: &url.URL{},
Header: http.Header{},
Close: r.Close,
Body: body,
Host: r.Host,
Method: r.Method,
Proto: r.Proto,
ContentLength: r.ContentLength,
}
*req.URL = *r.URL
for k, v := range r.Header {
for _, vv := range v {
req.Header.Add(k, vv)
}
}
return req
}

View File

@ -1,34 +0,0 @@
package request
import (
"bytes"
"io/ioutil"
"net/http"
"net/url"
"sync"
"testing"
)
func TestRequestCopyRace(t *testing.T) {
origReq := &http.Request{URL: &url.URL{}, Header: http.Header{}}
origReq.Header.Set("Header", "OrigValue")
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
wg.Add(1)
go func() {
req := copyHTTPRequest(origReq, ioutil.NopCloser(&bytes.Buffer{}))
req.Header.Set("Header", "Value")
go func() {
req2 := copyHTTPRequest(req, ioutil.NopCloser(&bytes.Buffer{}))
req2.Header.Add("Header", "Value2")
}()
_ = req.Header.Get("Header")
wg.Done()
}()
_ = origReq.Header.Get("Header")
}
origReq.Header.Get("Header")
wg.Wait()
}

View File

@ -1,37 +0,0 @@
// +build go1.5
package request_test
import (
"errors"
"strings"
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/awstesting/mock"
"github.com/stretchr/testify/assert"
)
func TestRequestCancelRetry(t *testing.T) {
c := make(chan struct{})
reqNum := 0
s := mock.NewMockClient(aws.NewConfig().WithMaxRetries(10))
s.Handlers.Validate.Clear()
s.Handlers.Unmarshal.Clear()
s.Handlers.UnmarshalMeta.Clear()
s.Handlers.UnmarshalError.Clear()
s.Handlers.Send.PushFront(func(r *request.Request) {
reqNum++
r.Error = errors.New("net/http: canceled")
})
out := &testData{}
r := s.NewRequest(&request.Operation{Name: "Operation"}, nil, out)
r.HTTPRequest.Cancel = c
close(c)
err := r.Send()
assert.True(t, strings.Contains(err.Error(), "canceled"))
assert.Equal(t, 1, reqNum)
}

View File

@ -1,122 +0,0 @@
package request
import (
"bytes"
"io"
"math/rand"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestOffsetReaderRead(t *testing.T) {
buf := []byte("testData")
reader := &offsetReader{buf: bytes.NewReader(buf)}
tempBuf := make([]byte, len(buf))
n, err := reader.Read(tempBuf)
assert.Equal(t, n, len(buf))
assert.Nil(t, err)
assert.Equal(t, buf, tempBuf)
}
func TestOffsetReaderClose(t *testing.T) {
buf := []byte("testData")
reader := &offsetReader{buf: bytes.NewReader(buf)}
err := reader.Close()
assert.Nil(t, err)
tempBuf := make([]byte, len(buf))
n, err := reader.Read(tempBuf)
assert.Equal(t, n, 0)
assert.Equal(t, err, io.EOF)
}
func TestOffsetReaderCloseAndCopy(t *testing.T) {
buf := []byte("testData")
tempBuf := make([]byte, len(buf))
reader := &offsetReader{buf: bytes.NewReader(buf)}
newReader := reader.CloseAndCopy(0)
n, err := reader.Read(tempBuf)
assert.Equal(t, n, 0)
assert.Equal(t, err, io.EOF)
n, err = newReader.Read(tempBuf)
assert.Equal(t, n, len(buf))
assert.Nil(t, err)
assert.Equal(t, buf, tempBuf)
}
func TestOffsetReaderCloseAndCopyOffset(t *testing.T) {
buf := []byte("testData")
tempBuf := make([]byte, len(buf))
reader := &offsetReader{buf: bytes.NewReader(buf)}
newReader := reader.CloseAndCopy(4)
n, err := newReader.Read(tempBuf)
assert.Equal(t, n, len(buf)-4)
assert.Nil(t, err)
expected := []byte{'D', 'a', 't', 'a', 0, 0, 0, 0}
assert.Equal(t, expected, tempBuf)
}
func TestOffsetReaderRace(t *testing.T) {
wg := sync.WaitGroup{}
f := func(reader *offsetReader) {
defer wg.Done()
var err error
buf := make([]byte, 1)
_, err = reader.Read(buf)
for err != io.EOF {
_, err = reader.Read(buf)
}
}
closeFn := func(reader *offsetReader) {
defer wg.Done()
time.Sleep(time.Duration(rand.Intn(20)+1) * time.Millisecond)
reader.Close()
}
for i := 0; i < 50; i++ {
reader := &offsetReader{buf: bytes.NewReader(make([]byte, 1024*1024))}
wg.Add(1)
go f(reader)
wg.Add(1)
go closeFn(reader)
}
wg.Wait()
}
func BenchmarkOffsetReader(b *testing.B) {
bufSize := 1024 * 1024 * 100
buf := make([]byte, bufSize)
reader := &offsetReader{buf: bytes.NewReader(buf)}
tempBuf := make([]byte, 1024)
for i := 0; i < b.N; i++ {
reader.Read(tempBuf)
}
}
func BenchmarkBytesReader(b *testing.B) {
bufSize := 1024 * 1024 * 100
buf := make([]byte, bufSize)
reader := bytes.NewReader(buf)
tempBuf := make([]byte, 1024)
for i := 0; i < b.N; i++ {
reader.Read(tempBuf)
}
}

View File

@ -1,33 +0,0 @@
// +build go1.6
package request_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/client"
"github.com/aws/aws-sdk-go/aws/client/metadata"
"github.com/aws/aws-sdk-go/aws/defaults"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/private/endpoints"
)
// Go versions 1.4 and 1.5 do not return an error. Version 1.5 will URL-encode
// the URI, while 1.4 will not.
func TestRequestInvalidEndpoint(t *testing.T) {
endpoint, _ := endpoints.NormalizeEndpoint("localhost:80 ", "test-service", "test-region", false, false)
r := request.New(
aws.Config{},
metadata.ClientInfo{Endpoint: endpoint},
defaults.Handlers(),
client.DefaultRetryer{},
&request.Operation{},
nil,
nil,
)
assert.Error(t, r.Error)
}

View File

@ -1,455 +0,0 @@
package request_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/awstesting/unit"
"github.com/aws/aws-sdk-go/service/dynamodb"
"github.com/aws/aws-sdk-go/service/route53"
"github.com/aws/aws-sdk-go/service/s3"
)
// Use DynamoDB methods for simplicity
func TestPaginationQueryPage(t *testing.T) {
db := dynamodb.New(unit.Session)
tokens, pages, numPages, gotToEnd := []map[string]*dynamodb.AttributeValue{}, []map[string]*dynamodb.AttributeValue{}, 0, false
reqNum := 0
resps := []*dynamodb.QueryOutput{
{
LastEvaluatedKey: map[string]*dynamodb.AttributeValue{"key": {S: aws.String("key1")}},
Count: aws.Int64(1),
Items: []map[string]*dynamodb.AttributeValue{
{
"key": {S: aws.String("key1")},
},
},
},
{
LastEvaluatedKey: map[string]*dynamodb.AttributeValue{"key": {S: aws.String("key2")}},
Count: aws.Int64(1),
Items: []map[string]*dynamodb.AttributeValue{
{
"key": {S: aws.String("key2")},
},
},
},
{
LastEvaluatedKey: map[string]*dynamodb.AttributeValue{},
Count: aws.Int64(1),
Items: []map[string]*dynamodb.AttributeValue{
{
"key": {S: aws.String("key3")},
},
},
},
}
db.Handlers.Send.Clear() // mock sending
db.Handlers.Unmarshal.Clear()
db.Handlers.UnmarshalMeta.Clear()
db.Handlers.ValidateResponse.Clear()
db.Handlers.Build.PushBack(func(r *request.Request) {
in := r.Params.(*dynamodb.QueryInput)
if in == nil {
tokens = append(tokens, nil)
} else if len(in.ExclusiveStartKey) != 0 {
tokens = append(tokens, in.ExclusiveStartKey)
}
})
db.Handlers.Unmarshal.PushBack(func(r *request.Request) {
r.Data = resps[reqNum]
reqNum++
})
params := &dynamodb.QueryInput{
Limit: aws.Int64(2),
TableName: aws.String("tablename"),
}
err := db.QueryPages(params, func(p *dynamodb.QueryOutput, last bool) bool {
numPages++
for _, item := range p.Items {
pages = append(pages, item)
}
if last {
if gotToEnd {
assert.Fail(t, "last=true happened twice")
}
gotToEnd = true
}
return true
})
assert.Nil(t, err)
assert.Equal(t,
[]map[string]*dynamodb.AttributeValue{
{"key": {S: aws.String("key1")}},
{"key": {S: aws.String("key2")}},
}, tokens)
assert.Equal(t,
[]map[string]*dynamodb.AttributeValue{
{"key": {S: aws.String("key1")}},
{"key": {S: aws.String("key2")}},
{"key": {S: aws.String("key3")}},
}, pages)
assert.Equal(t, 3, numPages)
assert.True(t, gotToEnd)
assert.Nil(t, params.ExclusiveStartKey)
}
// Use DynamoDB methods for simplicity
func TestPagination(t *testing.T) {
db := dynamodb.New(unit.Session)
tokens, pages, numPages, gotToEnd := []string{}, []string{}, 0, false
reqNum := 0
resps := []*dynamodb.ListTablesOutput{
{TableNames: []*string{aws.String("Table1"), aws.String("Table2")}, LastEvaluatedTableName: aws.String("Table2")},
{TableNames: []*string{aws.String("Table3"), aws.String("Table4")}, LastEvaluatedTableName: aws.String("Table4")},
{TableNames: []*string{aws.String("Table5")}},
}
db.Handlers.Send.Clear() // mock sending
db.Handlers.Unmarshal.Clear()
db.Handlers.UnmarshalMeta.Clear()
db.Handlers.ValidateResponse.Clear()
db.Handlers.Build.PushBack(func(r *request.Request) {
in := r.Params.(*dynamodb.ListTablesInput)
if in == nil {
tokens = append(tokens, "")
} else if in.ExclusiveStartTableName != nil {
tokens = append(tokens, *in.ExclusiveStartTableName)
}
})
db.Handlers.Unmarshal.PushBack(func(r *request.Request) {
r.Data = resps[reqNum]
reqNum++
})
params := &dynamodb.ListTablesInput{Limit: aws.Int64(2)}
err := db.ListTablesPages(params, func(p *dynamodb.ListTablesOutput, last bool) bool {
numPages++
for _, t := range p.TableNames {
pages = append(pages, *t)
}
if last {
if gotToEnd {
assert.Fail(t, "last=true happened twice")
}
gotToEnd = true
}
return true
})
assert.Equal(t, []string{"Table2", "Table4"}, tokens)
assert.Equal(t, []string{"Table1", "Table2", "Table3", "Table4", "Table5"}, pages)
assert.Equal(t, 3, numPages)
assert.True(t, gotToEnd)
assert.Nil(t, err)
assert.Nil(t, params.ExclusiveStartTableName)
}
// Use DynamoDB methods for simplicity
func TestPaginationEachPage(t *testing.T) {
db := dynamodb.New(unit.Session)
tokens, pages, numPages, gotToEnd := []string{}, []string{}, 0, false
reqNum := 0
resps := []*dynamodb.ListTablesOutput{
{TableNames: []*string{aws.String("Table1"), aws.String("Table2")}, LastEvaluatedTableName: aws.String("Table2")},
{TableNames: []*string{aws.String("Table3"), aws.String("Table4")}, LastEvaluatedTableName: aws.String("Table4")},
{TableNames: []*string{aws.String("Table5")}},
}
db.Handlers.Send.Clear() // mock sending
db.Handlers.Unmarshal.Clear()
db.Handlers.UnmarshalMeta.Clear()
db.Handlers.ValidateResponse.Clear()
db.Handlers.Build.PushBack(func(r *request.Request) {
in := r.Params.(*dynamodb.ListTablesInput)
if in == nil {
tokens = append(tokens, "")
} else if in.ExclusiveStartTableName != nil {
tokens = append(tokens, *in.ExclusiveStartTableName)
}
})
db.Handlers.Unmarshal.PushBack(func(r *request.Request) {
r.Data = resps[reqNum]
reqNum++
})
params := &dynamodb.ListTablesInput{Limit: aws.Int64(2)}
req, _ := db.ListTablesRequest(params)
err := req.EachPage(func(p interface{}, last bool) bool {
numPages++
for _, t := range p.(*dynamodb.ListTablesOutput).TableNames {
pages = append(pages, *t)
}
if last {
if gotToEnd {
assert.Fail(t, "last=true happened twice")
}
gotToEnd = true
}
return true
})
assert.Equal(t, []string{"Table2", "Table4"}, tokens)
assert.Equal(t, []string{"Table1", "Table2", "Table3", "Table4", "Table5"}, pages)
assert.Equal(t, 3, numPages)
assert.True(t, gotToEnd)
assert.Nil(t, err)
}
// Use DynamoDB methods for simplicity
func TestPaginationEarlyExit(t *testing.T) {
db := dynamodb.New(unit.Session)
numPages, gotToEnd := 0, false
reqNum := 0
resps := []*dynamodb.ListTablesOutput{
{TableNames: []*string{aws.String("Table1"), aws.String("Table2")}, LastEvaluatedTableName: aws.String("Table2")},
{TableNames: []*string{aws.String("Table3"), aws.String("Table4")}, LastEvaluatedTableName: aws.String("Table4")},
{TableNames: []*string{aws.String("Table5")}},
}
db.Handlers.Send.Clear() // mock sending
db.Handlers.Unmarshal.Clear()
db.Handlers.UnmarshalMeta.Clear()
db.Handlers.ValidateResponse.Clear()
db.Handlers.Unmarshal.PushBack(func(r *request.Request) {
r.Data = resps[reqNum]
reqNum++
})
params := &dynamodb.ListTablesInput{Limit: aws.Int64(2)}
err := db.ListTablesPages(params, func(p *dynamodb.ListTablesOutput, last bool) bool {
numPages++
if numPages == 2 {
return false
}
if last {
if gotToEnd {
assert.Fail(t, "last=true happened twice")
}
gotToEnd = true
}
return true
})
assert.Equal(t, 2, numPages)
assert.False(t, gotToEnd)
assert.Nil(t, err)
}
func TestSkipPagination(t *testing.T) {
client := s3.New(unit.Session)
client.Handlers.Send.Clear() // mock sending
client.Handlers.Unmarshal.Clear()
client.Handlers.UnmarshalMeta.Clear()
client.Handlers.ValidateResponse.Clear()
client.Handlers.Unmarshal.PushBack(func(r *request.Request) {
r.Data = &s3.HeadBucketOutput{}
})
req, _ := client.HeadBucketRequest(&s3.HeadBucketInput{Bucket: aws.String("bucket")})
numPages, gotToEnd := 0, false
req.EachPage(func(p interface{}, last bool) bool {
numPages++
if last {
gotToEnd = true
}
return true
})
assert.Equal(t, 1, numPages)
assert.True(t, gotToEnd)
}
// Use S3 for simplicity
func TestPaginationTruncation(t *testing.T) {
client := s3.New(unit.Session)
reqNum := 0
resps := []*s3.ListObjectsOutput{
{IsTruncated: aws.Bool(true), Contents: []*s3.Object{{Key: aws.String("Key1")}}},
{IsTruncated: aws.Bool(true), Contents: []*s3.Object{{Key: aws.String("Key2")}}},
{IsTruncated: aws.Bool(false), Contents: []*s3.Object{{Key: aws.String("Key3")}}},
{IsTruncated: aws.Bool(true), Contents: []*s3.Object{{Key: aws.String("Key4")}}},
}
client.Handlers.Send.Clear() // mock sending
client.Handlers.Unmarshal.Clear()
client.Handlers.UnmarshalMeta.Clear()
client.Handlers.ValidateResponse.Clear()
client.Handlers.Unmarshal.PushBack(func(r *request.Request) {
r.Data = resps[reqNum]
reqNum++
})
params := &s3.ListObjectsInput{Bucket: aws.String("bucket")}
results := []string{}
err := client.ListObjectsPages(params, func(p *s3.ListObjectsOutput, last bool) bool {
results = append(results, *p.Contents[0].Key)
return true
})
assert.Equal(t, []string{"Key1", "Key2", "Key3"}, results)
assert.Nil(t, err)
// Try again without truncation token at all
reqNum = 0
resps[1].IsTruncated = nil
resps[2].IsTruncated = aws.Bool(true)
results = []string{}
err = client.ListObjectsPages(params, func(p *s3.ListObjectsOutput, last bool) bool {
results = append(results, *p.Contents[0].Key)
return true
})
assert.Equal(t, []string{"Key1", "Key2"}, results)
assert.Nil(t, err)
}
func TestPaginationNilToken(t *testing.T) {
client := route53.New(unit.Session)
reqNum := 0
resps := []*route53.ListResourceRecordSetsOutput{
{
ResourceRecordSets: []*route53.ResourceRecordSet{
{Name: aws.String("first.example.com.")},
},
IsTruncated: aws.Bool(true),
NextRecordName: aws.String("second.example.com."),
NextRecordType: aws.String("MX"),
NextRecordIdentifier: aws.String("second"),
MaxItems: aws.String("1"),
},
{
ResourceRecordSets: []*route53.ResourceRecordSet{
{Name: aws.String("second.example.com.")},
},
IsTruncated: aws.Bool(true),
NextRecordName: aws.String("third.example.com."),
NextRecordType: aws.String("MX"),
MaxItems: aws.String("1"),
},
{
ResourceRecordSets: []*route53.ResourceRecordSet{
{Name: aws.String("third.example.com.")},
},
IsTruncated: aws.Bool(false),
MaxItems: aws.String("1"),
},
}
client.Handlers.Send.Clear() // mock sending
client.Handlers.Unmarshal.Clear()
client.Handlers.UnmarshalMeta.Clear()
client.Handlers.ValidateResponse.Clear()
idents := []string{}
client.Handlers.Build.PushBack(func(r *request.Request) {
p := r.Params.(*route53.ListResourceRecordSetsInput)
idents = append(idents, aws.StringValue(p.StartRecordIdentifier))
})
client.Handlers.Unmarshal.PushBack(func(r *request.Request) {
r.Data = resps[reqNum]
reqNum++
})
params := &route53.ListResourceRecordSetsInput{
HostedZoneId: aws.String("id-zone"),
}
results := []string{}
err := client.ListResourceRecordSetsPages(params, func(p *route53.ListResourceRecordSetsOutput, last bool) bool {
results = append(results, *p.ResourceRecordSets[0].Name)
return true
})
assert.NoError(t, err)
assert.Equal(t, []string{"", "second", ""}, idents)
assert.Equal(t, []string{"first.example.com.", "second.example.com.", "third.example.com."}, results)
}
// Benchmarks
var benchResps = []*dynamodb.ListTablesOutput{
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE"), aws.String("NXT")}, LastEvaluatedTableName: aws.String("NXT")},
{TableNames: []*string{aws.String("TABLE")}},
}
var benchDb = func() *dynamodb.DynamoDB {
db := dynamodb.New(unit.Session)
db.Handlers.Send.Clear() // mock sending
db.Handlers.Unmarshal.Clear()
db.Handlers.UnmarshalMeta.Clear()
db.Handlers.ValidateResponse.Clear()
return db
}
func BenchmarkCodegenIterator(b *testing.B) {
reqNum := 0
db := benchDb()
db.Handlers.Unmarshal.PushBack(func(r *request.Request) {
r.Data = benchResps[reqNum]
reqNum++
})
input := &dynamodb.ListTablesInput{Limit: aws.Int64(2)}
iter := func(fn func(*dynamodb.ListTablesOutput, bool) bool) error {
page, _ := db.ListTablesRequest(input)
for ; page != nil; page = page.NextPage() {
page.Send()
out := page.Data.(*dynamodb.ListTablesOutput)
if result := fn(out, !page.HasNextPage()); page.Error != nil || !result {
return page.Error
}
}
return nil
}
for i := 0; i < b.N; i++ {
reqNum = 0
iter(func(p *dynamodb.ListTablesOutput, last bool) bool {
return true
})
}
}
func BenchmarkEachPageIterator(b *testing.B) {
reqNum := 0
db := benchDb()
db.Handlers.Unmarshal.PushBack(func(r *request.Request) {
r.Data = benchResps[reqNum]
reqNum++
})
input := &dynamodb.ListTablesInput{Limit: aws.Int64(2)}
for i := 0; i < b.N; i++ {
reqNum = 0
req, _ := db.ListTablesRequest(input)
req.EachPage(func(p interface{}, last bool) bool {
return true
})
}
}

View File

@ -1,380 +0,0 @@
package request_test
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"runtime"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/awstesting"
)
type testData struct {
Data string
}
func body(str string) io.ReadCloser {
return ioutil.NopCloser(bytes.NewReader([]byte(str)))
}
func unmarshal(req *request.Request) {
defer req.HTTPResponse.Body.Close()
if req.Data != nil {
json.NewDecoder(req.HTTPResponse.Body).Decode(req.Data)
}
return
}
func unmarshalError(req *request.Request) {
bodyBytes, err := ioutil.ReadAll(req.HTTPResponse.Body)
if err != nil {
req.Error = awserr.New("UnmarshaleError", req.HTTPResponse.Status, err)
return
}
if len(bodyBytes) == 0 {
req.Error = awserr.NewRequestFailure(
awserr.New("UnmarshaleError", req.HTTPResponse.Status, fmt.Errorf("empty body")),
req.HTTPResponse.StatusCode,
"",
)
return
}
var jsonErr jsonErrorResponse
if err := json.Unmarshal(bodyBytes, &jsonErr); err != nil {
req.Error = awserr.New("UnmarshaleError", "JSON unmarshal", err)
return
}
req.Error = awserr.NewRequestFailure(
awserr.New(jsonErr.Code, jsonErr.Message, nil),
req.HTTPResponse.StatusCode,
"",
)
}
type jsonErrorResponse struct {
Code string `json:"__type"`
Message string `json:"message"`
}
// test that retries occur for 5xx status codes
func TestRequestRecoverRetry5xx(t *testing.T) {
reqNum := 0
reqs := []http.Response{
{StatusCode: 500, Body: body(`{"__type":"UnknownError","message":"An error occurred."}`)},
{StatusCode: 501, Body: body(`{"__type":"UnknownError","message":"An error occurred."}`)},
{StatusCode: 200, Body: body(`{"data":"valid"}`)},
}
s := awstesting.NewClient(aws.NewConfig().WithMaxRetries(10))
s.Handlers.Validate.Clear()
s.Handlers.Unmarshal.PushBack(unmarshal)
s.Handlers.UnmarshalError.PushBack(unmarshalError)
s.Handlers.Send.Clear() // mock sending
s.Handlers.Send.PushBack(func(r *request.Request) {
r.HTTPResponse = &reqs[reqNum]
reqNum++
})
out := &testData{}
r := s.NewRequest(&request.Operation{Name: "Operation"}, nil, out)
err := r.Send()
assert.Nil(t, err)
assert.Equal(t, 2, int(r.RetryCount))
assert.Equal(t, "valid", out.Data)
}
// test that retries occur for 4xx status codes with a response type that can be retried - see `shouldRetry`
func TestRequestRecoverRetry4xxRetryable(t *testing.T) {
reqNum := 0
reqs := []http.Response{
{StatusCode: 400, Body: body(`{"__type":"Throttling","message":"Rate exceeded."}`)},
{StatusCode: 429, Body: body(`{"__type":"ProvisionedThroughputExceededException","message":"Rate exceeded."}`)},
{StatusCode: 200, Body: body(`{"data":"valid"}`)},
}
s := awstesting.NewClient(aws.NewConfig().WithMaxRetries(10))
s.Handlers.Validate.Clear()
s.Handlers.Unmarshal.PushBack(unmarshal)
s.Handlers.UnmarshalError.PushBack(unmarshalError)
s.Handlers.Send.Clear() // mock sending
s.Handlers.Send.PushBack(func(r *request.Request) {
r.HTTPResponse = &reqs[reqNum]
reqNum++
})
out := &testData{}
r := s.NewRequest(&request.Operation{Name: "Operation"}, nil, out)
err := r.Send()
assert.Nil(t, err)
assert.Equal(t, 2, int(r.RetryCount))
assert.Equal(t, "valid", out.Data)
}
// test that retries don't occur for 4xx status codes with a response type that can't be retried
func TestRequest4xxUnretryable(t *testing.T) {
s := awstesting.NewClient(aws.NewConfig().WithMaxRetries(10))
s.Handlers.Validate.Clear()
s.Handlers.Unmarshal.PushBack(unmarshal)
s.Handlers.UnmarshalError.PushBack(unmarshalError)
s.Handlers.Send.Clear() // mock sending
s.Handlers.Send.PushBack(func(r *request.Request) {
r.HTTPResponse = &http.Response{StatusCode: 401, Body: body(`{"__type":"SignatureDoesNotMatch","message":"Signature does not match."}`)}
})
out := &testData{}
r := s.NewRequest(&request.Operation{Name: "Operation"}, nil, out)
err := r.Send()
assert.NotNil(t, err)
if e, ok := err.(awserr.RequestFailure); ok {
assert.Equal(t, 401, e.StatusCode())
} else {
assert.Fail(t, "Expected error to be a service failure")
}
assert.Equal(t, "SignatureDoesNotMatch", err.(awserr.Error).Code())
assert.Equal(t, "Signature does not match.", err.(awserr.Error).Message())
assert.Equal(t, 0, int(r.RetryCount))
}
func TestRequestExhaustRetries(t *testing.T) {
delays := []time.Duration{}
sleepDelay := func(delay time.Duration) {
delays = append(delays, delay)
}
reqNum := 0
reqs := []http.Response{
{StatusCode: 500, Body: body(`{"__type":"UnknownError","message":"An error occurred."}`)},
{StatusCode: 500, Body: body(`{"__type":"UnknownError","message":"An error occurred."}`)},
{StatusCode: 500, Body: body(`{"__type":"UnknownError","message":"An error occurred."}`)},
{StatusCode: 500, Body: body(`{"__type":"UnknownError","message":"An error occurred."}`)},
}
s := awstesting.NewClient(aws.NewConfig().WithSleepDelay(sleepDelay))
s.Handlers.Validate.Clear()
s.Handlers.Unmarshal.PushBack(unmarshal)
s.Handlers.UnmarshalError.PushBack(unmarshalError)
s.Handlers.Send.Clear() // mock sending
s.Handlers.Send.PushBack(func(r *request.Request) {
r.HTTPResponse = &reqs[reqNum]
reqNum++
})
r := s.NewRequest(&request.Operation{Name: "Operation"}, nil, nil)
err := r.Send()
assert.NotNil(t, err)
if e, ok := err.(awserr.RequestFailure); ok {
assert.Equal(t, 500, e.StatusCode())
} else {
assert.Fail(t, "Expected error to be a service failure")
}
assert.Equal(t, "UnknownError", err.(awserr.Error).Code())
assert.Equal(t, "An error occurred.", err.(awserr.Error).Message())
assert.Equal(t, 3, int(r.RetryCount))
expectDelays := []struct{ min, max time.Duration }{{30, 59}, {60, 118}, {120, 236}}
for i, v := range delays {
min := expectDelays[i].min * time.Millisecond
max := expectDelays[i].max * time.Millisecond
assert.True(t, min <= v && v <= max,
"Expect delay to be within range, i:%d, v:%s, min:%s, max:%s", i, v, min, max)
}
}
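
The asserted delay windows (30–59 ms, 60–118 ms, 120–236 ms) are consistent with exponential backoff where each attempt doubles a base of 30 ms plus up to 30 ms of jitter. A minimal sketch that reproduces those ranges; the constants and formula are inferred from the assertions, not taken from the SDK's retryer source:

```
import (
	"math/rand"
	"time"
)

// backoffDelay reproduces the windows asserted above: attempt n sleeps for
// (30ms + jitter) * 2^n, i.e. 30-59ms, then 60-118ms, then 120-236ms.
// Inferred from the test expectations, not the SDK source.
func backoffDelay(retryCount int) time.Duration {
	jittered := 30 + rand.Intn(30) // 30-59 ms
	return time.Duration(jittered<<uint(retryCount)) * time.Millisecond
}
```
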
// test that the request is retried after the credentials are expired.
func TestRequestRecoverExpiredCreds(t *testing.T) {
reqNum := 0
reqs := []http.Response{
{StatusCode: 400, Body: body(`{"__type":"ExpiredTokenException","message":"expired token"}`)},
{StatusCode: 200, Body: body(`{"data":"valid"}`)},
}
s := awstesting.NewClient(&aws.Config{MaxRetries: aws.Int(10), Credentials: credentials.NewStaticCredentials("AKID", "SECRET", "")})
s.Handlers.Validate.Clear()
s.Handlers.Unmarshal.PushBack(unmarshal)
s.Handlers.UnmarshalError.PushBack(unmarshalError)
credExpiredBeforeRetry := false
credExpiredAfterRetry := false
s.Handlers.AfterRetry.PushBack(func(r *request.Request) {
credExpiredAfterRetry = r.Config.Credentials.IsExpired()
})
s.Handlers.Sign.Clear()
s.Handlers.Sign.PushBack(func(r *request.Request) {
r.Config.Credentials.Get()
})
s.Handlers.Send.Clear() // mock sending
s.Handlers.Send.PushBack(func(r *request.Request) {
r.HTTPResponse = &reqs[reqNum]
reqNum++
})
out := &testData{}
r := s.NewRequest(&request.Operation{Name: "Operation"}, nil, out)
err := r.Send()
assert.Nil(t, err)
assert.False(t, credExpiredBeforeRetry, "Expect valid creds before retry check")
assert.True(t, credExpiredAfterRetry, "Expect expired creds after retry check")
assert.False(t, s.Config.Credentials.IsExpired(), "Expect valid creds after cred expired recovery")
assert.Equal(t, 1, int(r.RetryCount))
assert.Equal(t, "valid", out.Data)
}
func TestMakeAddtoUserAgentHandler(t *testing.T) {
fn := request.MakeAddToUserAgentHandler("name", "version", "extra1", "extra2")
r := &request.Request{HTTPRequest: &http.Request{Header: http.Header{}}}
r.HTTPRequest.Header.Set("User-Agent", "foo/bar")
fn(r)
assert.Equal(t, "foo/bar name/version (extra1; extra2)", r.HTTPRequest.Header.Get("User-Agent"))
}
func TestMakeAddtoUserAgentFreeFormHandler(t *testing.T) {
fn := request.MakeAddToUserAgentFreeFormHandler("name/version (extra1; extra2)")
r := &request.Request{HTTPRequest: &http.Request{Header: http.Header{}}}
r.HTTPRequest.Header.Set("User-Agent", "foo/bar")
fn(r)
assert.Equal(t, "foo/bar name/version (extra1; extra2)", r.HTTPRequest.Header.Get("User-Agent"))
}
func TestRequestUserAgent(t *testing.T) {
s := awstesting.NewClient(&aws.Config{Region: aws.String("us-east-1")})
// s.Handlers.Validate.Clear()
req := s.NewRequest(&request.Operation{Name: "Operation"}, nil, &testData{})
req.HTTPRequest.Header.Set("User-Agent", "foo/bar")
assert.NoError(t, req.Build())
expectUA := fmt.Sprintf("foo/bar %s/%s (%s; %s; %s)",
aws.SDKName, aws.SDKVersion, runtime.Version(), runtime.GOOS, runtime.GOARCH)
assert.Equal(t, expectUA, req.HTTPRequest.Header.Get("User-Agent"))
}
func TestRequestThrottleRetries(t *testing.T) {
delays := []time.Duration{}
sleepDelay := func(delay time.Duration) {
delays = append(delays, delay)
}
reqNum := 0
reqs := []http.Response{
{StatusCode: 500, Body: body(`{"__type":"Throttling","message":"An error occurred."}`)},
{StatusCode: 500, Body: body(`{"__type":"Throttling","message":"An error occurred."}`)},
{StatusCode: 500, Body: body(`{"__type":"Throttling","message":"An error occurred."}`)},
{StatusCode: 500, Body: body(`{"__type":"Throttling","message":"An error occurred."}`)},
}
s := awstesting.NewClient(aws.NewConfig().WithSleepDelay(sleepDelay))
s.Handlers.Validate.Clear()
s.Handlers.Unmarshal.PushBack(unmarshal)
s.Handlers.UnmarshalError.PushBack(unmarshalError)
s.Handlers.Send.Clear() // mock sending
s.Handlers.Send.PushBack(func(r *request.Request) {
r.HTTPResponse = &reqs[reqNum]
reqNum++
})
r := s.NewRequest(&request.Operation{Name: "Operation"}, nil, nil)
err := r.Send()
assert.NotNil(t, err)
if e, ok := err.(awserr.RequestFailure); ok {
assert.Equal(t, 500, e.StatusCode())
} else {
assert.Fail(t, "Expected error to be a service failure")
}
assert.Equal(t, "Throttling", err.(awserr.Error).Code())
assert.Equal(t, "An error occurred.", err.(awserr.Error).Message())
assert.Equal(t, 3, int(r.RetryCount))
expectDelays := []struct{ min, max time.Duration }{{500, 999}, {1000, 1998}, {2000, 3996}}
for i, v := range delays {
min := expectDelays[i].min * time.Millisecond
max := expectDelays[i].max * time.Millisecond
assert.True(t, min <= v && v <= max,
"Expect delay to be within range, i:%d, v:%s, min:%s, max:%s", i, v, min, max)
}
}
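
For throttling errors the same doubling applies, but the asserted windows (500–999 ms, 1–1.998 s, 2–3.996 s) imply a larger 500 ms floor. Under that assumption, the sketch above only needs a different base; again inferred from the assertions, not the SDK source:

```
// throttleDelay mirrors backoffDelay but with the 500ms floor implied by the
// assertions above (500-999ms, 1-1.998s, 2-3.996s).
func throttleDelay(retryCount int) time.Duration {
	jittered := 500 + rand.Intn(500) // 500-999 ms
	return time.Duration(jittered<<uint(retryCount)) * time.Millisecond
}
```
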
// test that retries occur for request timeouts when response.Body can be nil
func TestRequestRecoverTimeoutWithNilBody(t *testing.T) {
reqNum := 0
reqs := []*http.Response{
{StatusCode: 0, Body: nil}, // body can be nil when requests time out
{StatusCode: 200, Body: body(`{"data":"valid"}`)},
}
errors := []error{
errors.New("timeout"), nil,
}
s := awstesting.NewClient(aws.NewConfig().WithMaxRetries(10))
s.Handlers.Validate.Clear()
s.Handlers.Unmarshal.PushBack(unmarshal)
s.Handlers.UnmarshalError.PushBack(unmarshalError)
s.Handlers.AfterRetry.Clear() // force retry on all errors
s.Handlers.AfterRetry.PushBack(func(r *request.Request) {
if r.Error != nil {
r.Error = nil
r.Retryable = aws.Bool(true)
r.RetryCount++
}
})
s.Handlers.Send.Clear() // mock sending
s.Handlers.Send.PushBack(func(r *request.Request) {
r.HTTPResponse = reqs[reqNum]
r.Error = errors[reqNum]
reqNum++
})
out := &testData{}
r := s.NewRequest(&request.Operation{Name: "Operation"}, nil, out)
err := r.Send()
assert.Nil(t, err)
assert.Equal(t, 1, int(r.RetryCount))
assert.Equal(t, "valid", out.Data)
}
func TestRequestRecoverTimeoutWithNilResponse(t *testing.T) {
reqNum := 0
reqs := []*http.Response{
nil,
{StatusCode: 200, Body: body(`{"data":"valid"}`)},
}
errors := []error{
errors.New("timeout"),
nil,
}
s := awstesting.NewClient(aws.NewConfig().WithMaxRetries(10))
s.Handlers.Validate.Clear()
s.Handlers.Unmarshal.PushBack(unmarshal)
s.Handlers.UnmarshalError.PushBack(unmarshalError)
s.Handlers.AfterRetry.Clear() // force retry on all errors
s.Handlers.AfterRetry.PushBack(func(r *request.Request) {
if r.Error != nil {
r.Error = nil
r.Retryable = aws.Bool(true)
r.RetryCount++
}
})
s.Handlers.Send.Clear() // mock sending
s.Handlers.Send.PushBack(func(r *request.Request) {
r.HTTPResponse = reqs[reqNum]
r.Error = errors[reqNum]
reqNum++
})
out := &testData{}
r := s.NewRequest(&request.Operation{Name: "Operation"}, nil, out)
err := r.Send()
assert.Nil(t, err)
assert.Equal(t, 1, int(r.RetryCount))
assert.Equal(t, "valid", out.Data)
}

View File

@ -1,16 +0,0 @@
package request
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws/awserr"
)
func TestRequestThrottling(t *testing.T) {
req := Request{}
req.Error = awserr.New("Throttling", "", nil)
assert.True(t, req.IsErrorThrottle())
}

View File

@ -1,276 +0,0 @@
package session
import (
"os"
"path/filepath"
"strings"
"testing"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/stretchr/testify/assert"
)
func TestLoadEnvConfig_Creds(t *testing.T) {
env := stashEnv()
defer popEnv(env)
cases := []struct {
Env map[string]string
Val credentials.Value
}{
{
Env: map[string]string{
"AWS_ACCESS_KEY": "AKID",
},
Val: credentials.Value{},
},
{
Env: map[string]string{
"AWS_ACCESS_KEY_ID": "AKID",
},
Val: credentials.Value{},
},
{
Env: map[string]string{
"AWS_SECRET_KEY": "SECRET",
},
Val: credentials.Value{},
},
{
Env: map[string]string{
"AWS_SECRET_ACCESS_KEY": "SECRET",
},
Val: credentials.Value{},
},
{
Env: map[string]string{
"AWS_ACCESS_KEY_ID": "AKID",
"AWS_SECRET_ACCESS_KEY": "SECRET",
},
Val: credentials.Value{
AccessKeyID: "AKID", SecretAccessKey: "SECRET",
ProviderName: "EnvConfigCredentials",
},
},
{
Env: map[string]string{
"AWS_ACCESS_KEY": "AKID",
"AWS_SECRET_KEY": "SECRET",
},
Val: credentials.Value{
AccessKeyID: "AKID", SecretAccessKey: "SECRET",
ProviderName: "EnvConfigCredentials",
},
},
{
Env: map[string]string{
"AWS_ACCESS_KEY": "AKID",
"AWS_SECRET_KEY": "SECRET",
"AWS_SESSION_TOKEN": "TOKEN",
},
Val: credentials.Value{
AccessKeyID: "AKID", SecretAccessKey: "SECRET", SessionToken: "TOKEN",
ProviderName: "EnvConfigCredentials",
},
},
}
for _, c := range cases {
os.Clearenv()
for k, v := range c.Env {
os.Setenv(k, v)
}
cfg := loadEnvConfig()
assert.Equal(t, c.Val, cfg.Creds)
}
}
func TestLoadEnvConfig(t *testing.T) {
env := stashEnv()
defer popEnv(env)
cases := []struct {
Env map[string]string
Region, Profile string
UseSharedConfigCall bool
}{
{
Env: map[string]string{
"AWS_REGION": "region",
"AWS_PROFILE": "profile",
},
Region: "region", Profile: "profile",
},
{
Env: map[string]string{
"AWS_REGION": "region",
"AWS_DEFAULT_REGION": "default_region",
"AWS_PROFILE": "profile",
"AWS_DEFAULT_PROFILE": "default_profile",
},
Region: "region", Profile: "profile",
},
{
Env: map[string]string{
"AWS_REGION": "region",
"AWS_DEFAULT_REGION": "default_region",
"AWS_PROFILE": "profile",
"AWS_DEFAULT_PROFILE": "default_profile",
"AWS_SDK_LOAD_CONFIG": "1",
},
Region: "region", Profile: "profile",
},
{
Env: map[string]string{
"AWS_DEFAULT_REGION": "default_region",
"AWS_DEFAULT_PROFILE": "default_profile",
},
},
{
Env: map[string]string{
"AWS_DEFAULT_REGION": "default_region",
"AWS_DEFAULT_PROFILE": "default_profile",
"AWS_SDK_LOAD_CONFIG": "1",
},
Region: "default_region", Profile: "default_profile",
},
{
Env: map[string]string{
"AWS_REGION": "region",
"AWS_PROFILE": "profile",
},
Region: "region", Profile: "profile",
UseSharedConfigCall: true,
},
{
Env: map[string]string{
"AWS_REGION": "region",
"AWS_DEFAULT_REGION": "default_region",
"AWS_PROFILE": "profile",
"AWS_DEFAULT_PROFILE": "default_profile",
},
Region: "region", Profile: "profile",
UseSharedConfigCall: true,
},
{
Env: map[string]string{
"AWS_REGION": "region",
"AWS_DEFAULT_REGION": "default_region",
"AWS_PROFILE": "profile",
"AWS_DEFAULT_PROFILE": "default_profile",
"AWS_SDK_LOAD_CONFIG": "1",
},
Region: "region", Profile: "profile",
UseSharedConfigCall: true,
},
{
Env: map[string]string{
"AWS_DEFAULT_REGION": "default_region",
"AWS_DEFAULT_PROFILE": "default_profile",
},
Region: "default_region", Profile: "default_profile",
UseSharedConfigCall: true,
},
{
Env: map[string]string{
"AWS_DEFAULT_REGION": "default_region",
"AWS_DEFAULT_PROFILE": "default_profile",
"AWS_SDK_LOAD_CONFIG": "1",
},
Region: "default_region", Profile: "default_profile",
UseSharedConfigCall: true,
},
}
for _, c := range cases {
os.Clearenv()
for k, v := range c.Env {
os.Setenv(k, v)
}
var cfg envConfig
if c.UseSharedConfigCall {
cfg = loadSharedEnvConfig()
} else {
cfg = loadEnvConfig()
}
assert.Equal(t, c.Region, cfg.Region)
assert.Equal(t, c.Profile, cfg.Profile)
}
}
func TestSharedCredsFilename(t *testing.T) {
env := stashEnv()
defer popEnv(env)
os.Setenv("USERPROFILE", "profile_dir")
expect := filepath.Join("profile_dir", ".aws", "credentials")
name := sharedCredentialsFilename()
assert.Equal(t, expect, name)
os.Setenv("HOME", "home_dir")
expect = filepath.Join("home_dir", ".aws", "credentials")
name = sharedCredentialsFilename()
assert.Equal(t, expect, name)
expect = filepath.Join("path/to/credentials/file")
os.Setenv("AWS_SHARED_CREDENTIALS_FILE", expect)
name = sharedCredentialsFilename()
assert.Equal(t, expect, name)
}
func TestSharedConfigFilename(t *testing.T) {
env := stashEnv()
defer popEnv(env)
os.Setenv("USERPROFILE", "profile_dir")
expect := filepath.Join("profile_dir", ".aws", "config")
name := sharedConfigFilename()
assert.Equal(t, expect, name)
os.Setenv("HOME", "home_dir")
expect = filepath.Join("home_dir", ".aws", "config")
name = sharedConfigFilename()
assert.Equal(t, expect, name)
expect = filepath.Join("path/to/config/file")
os.Setenv("AWS_CONFIG_FILE", expect)
name = sharedConfigFilename()
assert.Equal(t, expect, name)
}
func TestSetEnvValue(t *testing.T) {
env := stashEnv()
defer popEnv(env)
os.Setenv("empty_key", "")
os.Setenv("second_key", "2")
os.Setenv("third_key", "3")
var dst string
setFromEnvVal(&dst, []string{
"empty_key", "first_key", "second_key", "third_key",
})
assert.Equal(t, "2", dst)
}
func stashEnv() []string {
env := os.Environ()
os.Clearenv()
return env
}
func popEnv(env []string) {
os.Clearenv()
for _, e := range env {
p := strings.SplitN(e, "=", 2)
os.Setenv(p[0], p[1])
}
}

View File

@ -1,344 +0,0 @@
package session
import (
"bytes"
"fmt"
"net/http"
"net/http/httptest"
"os"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/defaults"
"github.com/aws/aws-sdk-go/service/s3"
)
func TestNewDefaultSession(t *testing.T) {
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
s := New(&aws.Config{Region: aws.String("region")})
assert.Equal(t, "region", *s.Config.Region)
assert.Equal(t, http.DefaultClient, s.Config.HTTPClient)
assert.NotNil(t, s.Config.Logger)
assert.Equal(t, aws.LogOff, *s.Config.LogLevel)
}
func TestNew_WithCustomCreds(t *testing.T) {
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
customCreds := credentials.NewStaticCredentials("AKID", "SECRET", "TOKEN")
s := New(&aws.Config{Credentials: customCreds})
assert.Equal(t, customCreds, s.Config.Credentials)
}
type mockLogger struct {
*bytes.Buffer
}
func (w mockLogger) Log(args ...interface{}) {
fmt.Fprintln(w, args...)
}
func TestNew_WithSessionLoadError(t *testing.T) {
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
os.Setenv("AWS_SDK_LOAD_CONFIG", "1")
os.Setenv("AWS_CONFIG_FILE", testConfigFilename)
os.Setenv("AWS_PROFILE", "assume_role_invalid_source_profile")
logger := bytes.Buffer{}
s := New(&aws.Config{Logger: &mockLogger{&logger}})
assert.NotNil(t, s)
svc := s3.New(s)
_, err := svc.ListBuckets(&s3.ListBucketsInput{})
assert.Error(t, err)
assert.Contains(t, logger.String(), "ERROR: failed to create session with AWS_SDK_LOAD_CONFIG enabled")
assert.Contains(t, err.Error(), SharedConfigAssumeRoleError{
RoleARN: "assume_role_invalid_source_profile_role_arn",
}.Error())
}
func TestSessionCopy(t *testing.T) {
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
os.Setenv("AWS_REGION", "orig_region")
s := Session{
Config: defaults.Config(),
Handlers: defaults.Handlers(),
}
newSess := s.Copy(&aws.Config{Region: aws.String("new_region")})
assert.Equal(t, "orig_region", *s.Config.Region)
assert.Equal(t, "new_region", *newSess.Config.Region)
}
func TestSessionClientConfig(t *testing.T) {
s, err := NewSession(&aws.Config{Region: aws.String("orig_region")})
assert.NoError(t, err)
cfg := s.ClientConfig("s3", &aws.Config{Region: aws.String("us-west-2")})
assert.Equal(t, "https://s3-us-west-2.amazonaws.com", cfg.Endpoint)
assert.Empty(t, cfg.SigningRegion)
assert.Equal(t, "us-west-2", *cfg.Config.Region)
}
func TestNewSession_NoCredentials(t *testing.T) {
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
s, err := NewSession()
assert.NoError(t, err)
assert.NotNil(t, s.Config.Credentials)
assert.NotEqual(t, credentials.AnonymousCredentials, s.Config.Credentials)
}
func TestNewSessionWithOptions_OverrideProfile(t *testing.T) {
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
os.Setenv("AWS_SDK_LOAD_CONFIG", "1")
os.Setenv("AWS_SHARED_CREDENTIALS_FILE", testConfigFilename)
os.Setenv("AWS_PROFILE", "other_profile")
s, err := NewSessionWithOptions(Options{
Profile: "full_profile",
})
assert.NoError(t, err)
assert.Equal(t, "full_profile_region", *s.Config.Region)
creds, err := s.Config.Credentials.Get()
assert.NoError(t, err)
assert.Equal(t, "full_profile_akid", creds.AccessKeyID)
assert.Equal(t, "full_profile_secret", creds.SecretAccessKey)
assert.Empty(t, creds.SessionToken)
assert.Contains(t, creds.ProviderName, "SharedConfigCredentials")
}
func TestNewSessionWithOptions_OverrideSharedConfigEnable(t *testing.T) {
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
os.Setenv("AWS_SDK_LOAD_CONFIG", "0")
os.Setenv("AWS_SHARED_CREDENTIALS_FILE", testConfigFilename)
os.Setenv("AWS_PROFILE", "full_profile")
s, err := NewSessionWithOptions(Options{
SharedConfigState: SharedConfigEnable,
})
assert.NoError(t, err)
assert.Equal(t, "full_profile_region", *s.Config.Region)
creds, err := s.Config.Credentials.Get()
assert.NoError(t, err)
assert.Equal(t, "full_profile_akid", creds.AccessKeyID)
assert.Equal(t, "full_profile_secret", creds.SecretAccessKey)
assert.Empty(t, creds.SessionToken)
assert.Contains(t, creds.ProviderName, "SharedConfigCredentials")
}
func TestNewSessionWithOptions_OverrideSharedConfigDisable(t *testing.T) {
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
os.Setenv("AWS_SDK_LOAD_CONFIG", "1")
os.Setenv("AWS_SHARED_CREDENTIALS_FILE", testConfigFilename)
os.Setenv("AWS_PROFILE", "full_profile")
s, err := NewSessionWithOptions(Options{
SharedConfigState: SharedConfigDisable,
})
assert.NoError(t, err)
assert.Empty(t, *s.Config.Region)
creds, err := s.Config.Credentials.Get()
assert.NoError(t, err)
assert.Equal(t, "full_profile_akid", creds.AccessKeyID)
assert.Equal(t, "full_profile_secret", creds.SecretAccessKey)
assert.Empty(t, creds.SessionToken)
assert.Contains(t, creds.ProviderName, "SharedConfigCredentials")
}
func TestNewSessionWithOptions_Overrides(t *testing.T) {
cases := []struct {
InEnvs map[string]string
InProfile string
OutRegion string
OutCreds credentials.Value
}{
{
InEnvs: map[string]string{
"AWS_SDK_LOAD_CONFIG": "0",
"AWS_SHARED_CREDENTIALS_FILE": testConfigFilename,
"AWS_PROFILE": "other_profile",
},
InProfile: "full_profile",
OutRegion: "full_profile_region",
OutCreds: credentials.Value{
AccessKeyID: "full_profile_akid",
SecretAccessKey: "full_profile_secret",
ProviderName: "SharedConfigCredentials",
},
},
{
InEnvs: map[string]string{
"AWS_SDK_LOAD_CONFIG": "0",
"AWS_SHARED_CREDENTIALS_FILE": testConfigFilename,
"AWS_REGION": "env_region",
"AWS_ACCESS_KEY": "env_akid",
"AWS_SECRET_ACCESS_KEY": "env_secret",
"AWS_PROFILE": "other_profile",
},
InProfile: "full_profile",
OutRegion: "env_region",
OutCreds: credentials.Value{
AccessKeyID: "env_akid",
SecretAccessKey: "env_secret",
ProviderName: "EnvConfigCredentials",
},
},
{
InEnvs: map[string]string{
"AWS_SDK_LOAD_CONFIG": "0",
"AWS_SHARED_CREDENTIALS_FILE": testConfigFilename,
"AWS_CONFIG_FILE": testConfigOtherFilename,
"AWS_PROFILE": "shared_profile",
},
InProfile: "config_file_load_order",
OutRegion: "shared_config_region",
OutCreds: credentials.Value{
AccessKeyID: "shared_config_akid",
SecretAccessKey: "shared_config_secret",
ProviderName: "SharedConfigCredentials",
},
},
}
for _, c := range cases {
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
for k, v := range c.InEnvs {
os.Setenv(k, v)
}
s, err := NewSessionWithOptions(Options{
Profile: c.InProfile,
SharedConfigState: SharedConfigEnable,
})
assert.NoError(t, err)
creds, err := s.Config.Credentials.Get()
assert.NoError(t, err)
assert.Equal(t, c.OutRegion, *s.Config.Region)
assert.Equal(t, c.OutCreds.AccessKeyID, creds.AccessKeyID)
assert.Equal(t, c.OutCreds.SecretAccessKey, creds.SecretAccessKey)
assert.Equal(t, c.OutCreds.SessionToken, creds.SessionToken)
assert.Contains(t, creds.ProviderName, c.OutCreds.ProviderName)
}
}
func TestSessionAssumeRole(t *testing.T) {
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
os.Setenv("AWS_REGION", "us-east-1")
os.Setenv("AWS_SDK_LOAD_CONFIG", "1")
os.Setenv("AWS_SHARED_CREDENTIALS_FILE", testConfigFilename)
os.Setenv("AWS_PROFILE", "assume_role_w_creds")
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
const respMsg = `
<AssumeRoleResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
<AssumeRoleResult>
<AssumedRoleUser>
<Arn>arn:aws:sts::account_id:assumed-role/role/session_name</Arn>
<AssumedRoleId>AKID:session_name</AssumedRoleId>
</AssumedRoleUser>
<Credentials>
<AccessKeyId>AKID</AccessKeyId>
<SecretAccessKey>SECRET</SecretAccessKey>
<SessionToken>SESSION_TOKEN</SessionToken>
<Expiration>%s</Expiration>
</Credentials>
</AssumeRoleResult>
<ResponseMetadata>
<RequestId>request-id</RequestId>
</ResponseMetadata>
</AssumeRoleResponse>
`
w.Write([]byte(fmt.Sprintf(respMsg, time.Now().Add(15*time.Minute).Format("2006-01-02T15:04:05Z"))))
}))
s, err := NewSession(&aws.Config{Endpoint: aws.String(server.URL), DisableSSL: aws.Bool(true)})
creds, err := s.Config.Credentials.Get()
assert.NoError(t, err)
assert.Equal(t, "AKID", creds.AccessKeyID)
assert.Equal(t, "SECRET", creds.SecretAccessKey)
assert.Equal(t, "SESSION_TOKEN", creds.SessionToken)
assert.Contains(t, creds.ProviderName, "AssumeRoleProvider")
}
func TestSessionAssumeRole_DisableSharedConfig(t *testing.T) {
// Backwards compatibility with Shared config disabled
// assume role should not be built into the config.
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
os.Setenv("AWS_SDK_LOAD_CONFIG", "0")
os.Setenv("AWS_SHARED_CREDENTIALS_FILE", testConfigFilename)
os.Setenv("AWS_PROFILE", "assume_role_w_creds")
s, err := NewSession()
assert.NoError(t, err)
creds, err := s.Config.Credentials.Get()
assert.NoError(t, err)
assert.Equal(t, "assume_role_w_creds_akid", creds.AccessKeyID)
assert.Equal(t, "assume_role_w_creds_secret", creds.SecretAccessKey)
assert.Contains(t, creds.ProviderName, "SharedConfigCredentials")
}
func TestSessionAssumeRole_InvalidSourceProfile(t *testing.T) {
// With shared config enabled, an assume role profile that points at a
// missing source profile should cause session creation to fail.
oldEnv := initSessionTestEnv()
defer popEnv(oldEnv)
os.Setenv("AWS_SDK_LOAD_CONFIG", "1")
os.Setenv("AWS_SHARED_CREDENTIALS_FILE", testConfigFilename)
os.Setenv("AWS_PROFILE", "assume_role_invalid_source_profile")
s, err := NewSession()
assert.Error(t, err)
assert.Contains(t, err.Error(), "SharedConfigAssumeRoleError: failed to load assume role")
assert.Nil(t, s)
}
func initSessionTestEnv() (oldEnv []string) {
oldEnv = stashEnv()
os.Setenv("AWS_CONFIG_FILE", "file_not_exists")
os.Setenv("AWS_SHARED_CREDENTIALS_FILE", "file_not_exists")
return oldEnv
}

View File

@ -1,264 +0,0 @@
package session
import (
"fmt"
"path/filepath"
"testing"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/go-ini/ini"
"github.com/stretchr/testify/assert"
)
var (
testConfigFilename = filepath.Join("testdata", "shared_config")
testConfigOtherFilename = filepath.Join("testdata", "shared_config_other")
)
func TestLoadSharedConfig(t *testing.T) {
cases := []struct {
Filenames []string
Profile string
Expected sharedConfig
Err error
}{
{
Filenames: []string{"file_not_exists"},
Profile: "default",
},
{
Filenames: []string{testConfigFilename},
Expected: sharedConfig{
Region: "default_region",
},
},
{
Filenames: []string{testConfigOtherFilename, testConfigFilename},
Profile: "config_file_load_order",
Expected: sharedConfig{
Region: "shared_config_region",
Creds: credentials.Value{
AccessKeyID: "shared_config_akid",
SecretAccessKey: "shared_config_secret",
ProviderName: fmt.Sprintf("SharedConfigCredentials: %s", testConfigFilename),
},
},
},
{
Filenames: []string{testConfigFilename, testConfigOtherFilename},
Profile: "config_file_load_order",
Expected: sharedConfig{
Region: "shared_config_other_region",
Creds: credentials.Value{
AccessKeyID: "shared_config_other_akid",
SecretAccessKey: "shared_config_other_secret",
ProviderName: fmt.Sprintf("SharedConfigCredentials: %s", testConfigOtherFilename),
},
},
},
{
Filenames: []string{testConfigOtherFilename, testConfigFilename},
Profile: "assume_role",
Expected: sharedConfig{
AssumeRole: assumeRoleConfig{
RoleARN: "assume_role_role_arn",
SourceProfile: "complete_creds",
},
AssumeRoleSource: &sharedConfig{
Creds: credentials.Value{
AccessKeyID: "complete_creds_akid",
SecretAccessKey: "complete_creds_secret",
ProviderName: fmt.Sprintf("SharedConfigCredentials: %s", testConfigFilename),
},
},
},
},
{
Filenames: []string{testConfigOtherFilename, testConfigFilename},
Profile: "assume_role_invalid_source_profile",
Expected: sharedConfig{
AssumeRole: assumeRoleConfig{
RoleARN: "assume_role_invalid_source_profile_role_arn",
SourceProfile: "profile_not_exists",
},
},
Err: SharedConfigAssumeRoleError{RoleARN: "assume_role_invalid_source_profile_role_arn"},
},
{
Filenames: []string{testConfigOtherFilename, testConfigFilename},
Profile: "assume_role_w_creds",
Expected: sharedConfig{
Creds: credentials.Value{
AccessKeyID: "assume_role_w_creds_akid",
SecretAccessKey: "assume_role_w_creds_secret",
ProviderName: fmt.Sprintf("SharedConfigCredentials: %s", testConfigFilename),
},
AssumeRole: assumeRoleConfig{
RoleARN: "assume_role_w_creds_role_arn",
SourceProfile: "assume_role_w_creds",
ExternalID: "1234",
RoleSessionName: "assume_role_w_creds_session_name",
},
AssumeRoleSource: &sharedConfig{
Creds: credentials.Value{
AccessKeyID: "assume_role_w_creds_akid",
SecretAccessKey: "assume_role_w_creds_secret",
ProviderName: fmt.Sprintf("SharedConfigCredentials: %s", testConfigFilename),
},
},
},
},
{
Filenames: []string{testConfigOtherFilename, testConfigFilename},
Profile: "assume_role_wo_creds",
Expected: sharedConfig{
AssumeRole: assumeRoleConfig{
RoleARN: "assume_role_wo_creds_role_arn",
SourceProfile: "assume_role_wo_creds",
},
},
Err: SharedConfigAssumeRoleError{RoleARN: "assume_role_wo_creds_role_arn"},
},
{
Filenames: []string{filepath.Join("testdata", "shared_config_invalid_ini")},
Profile: "profile_name",
Err: SharedConfigLoadError{Filename: filepath.Join("testdata", "shared_config_invalid_ini")},
},
}
for i, c := range cases {
cfg, err := loadSharedConfig(c.Profile, c.Filenames)
if c.Err != nil {
assert.Contains(t, err.Error(), c.Err.Error(), "expected error, %d", i)
continue
}
assert.NoError(t, err, "unexpected error, %d", i)
assert.Equal(t, c.Expected, cfg, "not equal, %d", i)
}
}
func TestLoadSharedConfigFromFile(t *testing.T) {
filename := testConfigFilename
f, err := ini.Load(filename)
if err != nil {
t.Fatalf("failed to load test config file, %s, %v", filename, err)
}
iniFile := sharedConfigFile{IniData: f, Filename: filename}
cases := []struct {
Profile string
Expected sharedConfig
Err error
}{
{
Profile: "default",
Expected: sharedConfig{Region: "default_region"},
},
{
Profile: "alt_profile_name",
Expected: sharedConfig{Region: "alt_profile_name_region"},
},
{
Profile: "short_profile_name_first",
Expected: sharedConfig{Region: "short_profile_name_first_short"},
},
{
Profile: "partial_creds",
Expected: sharedConfig{},
},
{
Profile: "complete_creds",
Expected: sharedConfig{
Creds: credentials.Value{
AccessKeyID: "complete_creds_akid",
SecretAccessKey: "complete_creds_secret",
ProviderName: fmt.Sprintf("SharedConfigCredentials: %s", testConfigFilename),
},
},
},
{
Profile: "complete_creds_with_token",
Expected: sharedConfig{
Creds: credentials.Value{
AccessKeyID: "complete_creds_with_token_akid",
SecretAccessKey: "complete_creds_with_token_secret",
SessionToken: "complete_creds_with_token_token",
ProviderName: fmt.Sprintf("SharedConfigCredentials: %s", testConfigFilename),
},
},
},
{
Profile: "full_profile",
Expected: sharedConfig{
Creds: credentials.Value{
AccessKeyID: "full_profile_akid",
SecretAccessKey: "full_profile_secret",
ProviderName: fmt.Sprintf("SharedConfigCredentials: %s", testConfigFilename),
},
Region: "full_profile_region",
},
},
{
Profile: "partial_assume_role",
Expected: sharedConfig{},
},
{
Profile: "assume_role",
Expected: sharedConfig{
AssumeRole: assumeRoleConfig{
RoleARN: "assume_role_role_arn",
SourceProfile: "complete_creds",
},
},
},
{
Profile: "does_not_exists",
Err: SharedConfigProfileNotExistsError{Profile: "does_not_exists"},
},
}
for i, c := range cases {
cfg := sharedConfig{}
err := cfg.setFromIniFile(c.Profile, iniFile)
if c.Err != nil {
assert.Contains(t, err.Error(), c.Err.Error(), "expected error, %d", i)
continue
}
assert.NoError(t, err, "unexpected error, %d", i)
assert.Equal(t, c.Expected, cfg, "not equal, %d", i)
}
}
func TestLoadSharedConfigIniFiles(t *testing.T) {
cases := []struct {
Filenames []string
Expected []sharedConfigFile
}{
{
Filenames: []string{"not_exists", testConfigFilename},
Expected: []sharedConfigFile{
{Filename: testConfigFilename},
},
},
{
Filenames: []string{testConfigFilename, testConfigOtherFilename},
Expected: []sharedConfigFile{
{Filename: testConfigFilename},
{Filename: testConfigOtherFilename},
},
},
}
for i, c := range cases {
files, err := loadSharedConfigIniFiles(c.Filenames)
assert.NoError(t, err, "unexpected error, %d", i)
assert.Equal(t, len(c.Expected), len(files), "expected num files, %d", i)
for i, expectedFile := range c.Expected {
assert.Equal(t, expectedFile.Filename, files[i].Filename)
}
}
}

View File

@ -1,77 +0,0 @@
package v4_test
import (
"net/http"
"net/url"
"testing"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/awstesting/unit"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/stretchr/testify/assert"
)
func TestPresignHandler(t *testing.T) {
svc := s3.New(unit.Session)
req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
Bucket: aws.String("bucket"),
Key: aws.String("key"),
ContentDisposition: aws.String("a+b c$d"),
ACL: aws.String("public-read"),
})
req.Time = time.Unix(0, 0)
urlstr, err := req.Presign(5 * time.Minute)
assert.NoError(t, err)
expectedDate := "19700101T000000Z"
expectedHeaders := "content-disposition;host;x-amz-acl"
expectedSig := "b2754ba8ffeb74a40b94767017e24c4672107d6d5a894648d5d332ca61f5ffe4"
expectedCred := "AKID/19700101/mock-region/s3/aws4_request"
u, _ := url.Parse(urlstr)
urlQ := u.Query()
assert.Equal(t, expectedSig, urlQ.Get("X-Amz-Signature"))
assert.Equal(t, expectedCred, urlQ.Get("X-Amz-Credential"))
assert.Equal(t, expectedHeaders, urlQ.Get("X-Amz-SignedHeaders"))
assert.Equal(t, expectedDate, urlQ.Get("X-Amz-Date"))
assert.Equal(t, "300", urlQ.Get("X-Amz-Expires"))
assert.NotContains(t, urlstr, "+") // + encoded as %20
}
func TestPresignRequest(t *testing.T) {
svc := s3.New(unit.Session)
req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
Bucket: aws.String("bucket"),
Key: aws.String("key"),
ContentDisposition: aws.String("a+b c$d"),
ACL: aws.String("public-read"),
})
req.Time = time.Unix(0, 0)
urlstr, headers, err := req.PresignRequest(5 * time.Minute)
assert.NoError(t, err)
expectedDate := "19700101T000000Z"
expectedHeaders := "content-disposition;host;x-amz-acl;x-amz-content-sha256"
expectedSig := "0d200ba61501d752acd06f39ef4dbe7d83ffd5ea15978dc3476dfc00b8eb574e"
expectedCred := "AKID/19700101/mock-region/s3/aws4_request"
expectedHeaderMap := http.Header{
"x-amz-acl": []string{"public-read"},
"content-disposition": []string{"a+b c$d"},
"x-amz-content-sha256": []string{"UNSIGNED-PAYLOAD"},
}
u, _ := url.Parse(urlstr)
urlQ := u.Query()
assert.Equal(t, expectedSig, urlQ.Get("X-Amz-Signature"))
assert.Equal(t, expectedCred, urlQ.Get("X-Amz-Credential"))
assert.Equal(t, expectedHeaders, urlQ.Get("X-Amz-SignedHeaders"))
assert.Equal(t, expectedDate, urlQ.Get("X-Amz-Date"))
assert.Equal(t, expectedHeaderMap, headers)
assert.Equal(t, "300", urlQ.Get("X-Amz-Expires"))
assert.NotContains(t, urlstr, "+") // + encoded as %20
}
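
Both tests drive presigning through a real service client; outside of tests the flow is the same: build a request and call `Presign`. A minimal sketch, with placeholder bucket and key names:

```
import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// presignExample returns a URL that grants temporary, credential-free access
// to the object until the presigned signature expires.
func presignExample() (string, error) {
	sess, err := session.NewSession(&aws.Config{Region: aws.String("us-east-1")})
	if err != nil {
		return "", err
	}
	svc := s3.New(sess)

	req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String("my-bucket"), // placeholder
		Key:    aws.String("my-key"),    // placeholder
	})
	return req.Presign(15 * time.Minute)
}
```
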

View File

@ -1,57 +0,0 @@
package v4
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestRuleCheckWhitelist(t *testing.T) {
w := whitelist{
mapRule{
"Cache-Control": struct{}{},
},
}
assert.True(t, w.IsValid("Cache-Control"))
assert.False(t, w.IsValid("Cache-"))
}
func TestRuleCheckBlacklist(t *testing.T) {
b := blacklist{
mapRule{
"Cache-Control": struct{}{},
},
}
assert.False(t, b.IsValid("Cache-Control"))
assert.True(t, b.IsValid("Cache-"))
}
func TestRuleCheckPattern(t *testing.T) {
p := patterns{"X-Amz-Meta-"}
assert.True(t, p.IsValid("X-Amz-Meta-"))
assert.True(t, p.IsValid("X-Amz-Meta-Star"))
assert.False(t, p.IsValid("Cache-"))
}
func TestRuleComplexWhitelist(t *testing.T) {
w := rules{
whitelist{
mapRule{
"Cache-Control": struct{}{},
},
},
patterns{"X-Amz-Meta-"},
}
r := rules{
inclusiveRules{patterns{"X-Amz-"}, blacklist{w}},
}
assert.True(t, r.IsValid("X-Amz-Blah"))
assert.False(t, r.IsValid("X-Amz-Meta-"))
assert.False(t, r.IsValid("X-Amz-Meta-Star"))
assert.False(t, r.IsValid("Cache-Control"))
}

View File

@ -1,353 +0,0 @@
package v4
import (
"io"
"net/http"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/awstesting"
)
func TestStripExcessHeaders(t *testing.T) {
vals := []string{
"123",
"1 2 3",
" 1 2 3",
"1 2 3",
"1 23",
"1 2 3",
"1 2 ",
" 1 2 ",
}
expected := []string{
"123",
"1 2 3",
"1 2 3",
"1 2 3",
"1 23",
"1 2 3",
"1 2",
"1 2",
}
newVals := stripExcessSpaces(vals)
for i := 0; i < len(newVals); i++ {
assert.Equal(t, expected[i], newVals[i], "test: %d", i)
}
}
func buildRequest(serviceName, region, body string) (*http.Request, io.ReadSeeker) {
endpoint := "https://" + serviceName + "." + region + ".amazonaws.com"
reader := strings.NewReader(body)
req, _ := http.NewRequest("POST", endpoint, reader)
req.URL.Opaque = "//example.org/bucket/key-._~,!@#$%^&*()"
req.Header.Add("X-Amz-Target", "prefix.Operation")
req.Header.Add("Content-Type", "application/x-amz-json-1.0")
req.Header.Add("Content-Length", string(len(body)))
req.Header.Add("X-Amz-Meta-Other-Header", "some-value=!@#$%^&* (+)")
req.Header.Add("X-Amz-Meta-Other-Header_With_Underscore", "some-value=!@#$%^&* (+)")
req.Header.Add("X-amz-Meta-Other-Header_With_Underscore", "some-value=!@#$%^&* (+)")
return req, reader
}
func buildSigner() Signer {
return Signer{
Credentials: credentials.NewStaticCredentials("AKID", "SECRET", "SESSION"),
}
}
func removeWS(text string) string {
text = strings.Replace(text, " ", "", -1)
text = strings.Replace(text, "\n", "", -1)
text = strings.Replace(text, "\t", "", -1)
return text
}
func assertEqual(t *testing.T, expected, given string) {
if removeWS(expected) != removeWS(given) {
t.Errorf("\nExpected: %s\nGiven: %s", expected, given)
}
}
func TestPresignRequest(t *testing.T) {
req, body := buildRequest("dynamodb", "us-east-1", "{}")
signer := buildSigner()
signer.Presign(req, body, "dynamodb", "us-east-1", 300*time.Second, time.Unix(0, 0))
expectedDate := "19700101T000000Z"
expectedHeaders := "content-length;content-type;host;x-amz-meta-other-header;x-amz-meta-other-header_with_underscore"
expectedSig := "ea7856749041f727690c580569738282e99c79355fe0d8f125d3b5535d2ece83"
expectedCred := "AKID/19700101/us-east-1/dynamodb/aws4_request"
expectedTarget := "prefix.Operation"
q := req.URL.Query()
assert.Equal(t, expectedSig, q.Get("X-Amz-Signature"))
assert.Equal(t, expectedCred, q.Get("X-Amz-Credential"))
assert.Equal(t, expectedHeaders, q.Get("X-Amz-SignedHeaders"))
assert.Equal(t, expectedDate, q.Get("X-Amz-Date"))
assert.Empty(t, q.Get("X-Amz-Meta-Other-Header"))
assert.Equal(t, expectedTarget, q.Get("X-Amz-Target"))
}
func TestSignRequest(t *testing.T) {
req, body := buildRequest("dynamodb", "us-east-1", "{}")
signer := buildSigner()
signer.Sign(req, body, "dynamodb", "us-east-1", time.Unix(0, 0))
expectedDate := "19700101T000000Z"
expectedSig := "AWS4-HMAC-SHA256 Credential=AKID/19700101/us-east-1/dynamodb/aws4_request, SignedHeaders=content-length;content-type;host;x-amz-date;x-amz-meta-other-header;x-amz-meta-other-header_with_underscore;x-amz-security-token;x-amz-target, Signature=ea766cabd2ec977d955a3c2bae1ae54f4515d70752f2207618396f20aa85bd21"
q := req.Header
assert.Equal(t, expectedSig, q.Get("Authorization"))
assert.Equal(t, expectedDate, q.Get("X-Amz-Date"))
}
func TestSignBodyS3(t *testing.T) {
req, body := buildRequest("s3", "us-east-1", "hello")
signer := buildSigner()
signer.Sign(req, body, "s3", "us-east-1", time.Now())
hash := req.Header.Get("X-Amz-Content-Sha256")
assert.Equal(t, "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824", hash)
}
func TestSignBodyGlacier(t *testing.T) {
req, body := buildRequest("glacier", "us-east-1", "hello")
signer := buildSigner()
signer.Sign(req, body, "glacier", "us-east-1", time.Now())
hash := req.Header.Get("X-Amz-Content-Sha256")
assert.Equal(t, "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824", hash)
}
func TestPresignEmptyBodyS3(t *testing.T) {
req, body := buildRequest("s3", "us-east-1", "hello")
signer := buildSigner()
signer.Presign(req, body, "s3", "us-east-1", 5*time.Minute, time.Now())
hash := req.Header.Get("X-Amz-Content-Sha256")
assert.Equal(t, "UNSIGNED-PAYLOAD", hash)
}
func TestSignPrecomputedBodyChecksum(t *testing.T) {
req, body := buildRequest("dynamodb", "us-east-1", "hello")
req.Header.Set("X-Amz-Content-Sha256", "PRECOMPUTED")
signer := buildSigner()
signer.Sign(req, body, "dynamodb", "us-east-1", time.Now())
hash := req.Header.Get("X-Amz-Content-Sha256")
assert.Equal(t, "PRECOMPUTED", hash)
}
func TestAnonymousCredentials(t *testing.T) {
svc := awstesting.NewClient(&aws.Config{Credentials: credentials.AnonymousCredentials})
r := svc.NewRequest(
&request.Operation{
Name: "BatchGetItem",
HTTPMethod: "POST",
HTTPPath: "/",
},
nil,
nil,
)
SignSDKRequest(r)
urlQ := r.HTTPRequest.URL.Query()
assert.Empty(t, urlQ.Get("X-Amz-Signature"))
assert.Empty(t, urlQ.Get("X-Amz-Credential"))
assert.Empty(t, urlQ.Get("X-Amz-SignedHeaders"))
assert.Empty(t, urlQ.Get("X-Amz-Date"))
hQ := r.HTTPRequest.Header
assert.Empty(t, hQ.Get("Authorization"))
assert.Empty(t, hQ.Get("X-Amz-Date"))
}
func TestIgnoreResignRequestWithValidCreds(t *testing.T) {
svc := awstesting.NewClient(&aws.Config{
Credentials: credentials.NewStaticCredentials("AKID", "SECRET", "SESSION"),
Region: aws.String("us-west-2"),
})
r := svc.NewRequest(
&request.Operation{
Name: "BatchGetItem",
HTTPMethod: "POST",
HTTPPath: "/",
},
nil,
nil,
)
SignSDKRequest(r)
sig := r.HTTPRequest.Header.Get("Authorization")
SignSDKRequest(r)
assert.Equal(t, sig, r.HTTPRequest.Header.Get("Authorization"))
}
func TestIgnorePreResignRequestWithValidCreds(t *testing.T) {
svc := awstesting.NewClient(&aws.Config{
Credentials: credentials.NewStaticCredentials("AKID", "SECRET", "SESSION"),
Region: aws.String("us-west-2"),
})
r := svc.NewRequest(
&request.Operation{
Name: "BatchGetItem",
HTTPMethod: "POST",
HTTPPath: "/",
},
nil,
nil,
)
r.ExpireTime = time.Minute * 10
SignSDKRequest(r)
sig := r.HTTPRequest.Header.Get("X-Amz-Signature")
SignSDKRequest(r)
assert.Equal(t, sig, r.HTTPRequest.Header.Get("X-Amz-Signature"))
}
func TestResignRequestExpiredCreds(t *testing.T) {
creds := credentials.NewStaticCredentials("AKID", "SECRET", "SESSION")
svc := awstesting.NewClient(&aws.Config{Credentials: creds})
r := svc.NewRequest(
&request.Operation{
Name: "BatchGetItem",
HTTPMethod: "POST",
HTTPPath: "/",
},
nil,
nil,
)
SignSDKRequest(r)
querySig := r.HTTPRequest.Header.Get("Authorization")
var origSignedHeaders string
for _, p := range strings.Split(querySig, ", ") {
if strings.HasPrefix(p, "SignedHeaders=") {
origSignedHeaders = p[len("SignedHeaders="):]
break
}
}
assert.NotEmpty(t, origSignedHeaders)
assert.NotContains(t, origSignedHeaders, "authorization")
origSignedAt := r.LastSignedAt
creds.Expire()
signSDKRequestWithCurrTime(r, func() time.Time {
// Simulate one second has passed so that signature's date changes
// when it is resigned.
return time.Now().Add(1 * time.Second)
})
updatedQuerySig := r.HTTPRequest.Header.Get("Authorization")
assert.NotEqual(t, querySig, updatedQuerySig)
var updatedSignedHeaders string
for _, p := range strings.Split(updatedQuerySig, ", ") {
if strings.HasPrefix(p, "SignedHeaders=") {
updatedSignedHeaders = p[len("SignedHeaders="):]
break
}
}
assert.NotEmpty(t, updatedSignedHeaders)
assert.NotContains(t, updatedQuerySig, "authorization")
assert.NotEqual(t, origSignedAt, r.LastSignedAt)
}
func TestPreResignRequestExpiredCreds(t *testing.T) {
provider := &credentials.StaticProvider{Value: credentials.Value{
AccessKeyID: "AKID",
SecretAccessKey: "SECRET",
SessionToken: "SESSION",
}}
creds := credentials.NewCredentials(provider)
svc := awstesting.NewClient(&aws.Config{Credentials: creds})
r := svc.NewRequest(
&request.Operation{
Name: "BatchGetItem",
HTTPMethod: "POST",
HTTPPath: "/",
},
nil,
nil,
)
r.ExpireTime = time.Minute * 10
SignSDKRequest(r)
querySig := r.HTTPRequest.URL.Query().Get("X-Amz-Signature")
signedHeaders := r.HTTPRequest.URL.Query().Get("X-Amz-SignedHeaders")
assert.NotEmpty(t, signedHeaders)
origSignedAt := r.LastSignedAt
creds.Expire()
signSDKRequestWithCurrTime(r, func() time.Time {
// Re-sign with a current time 48 hours in the past so the signature's date changes
return time.Now().Add(-48 * time.Hour)
})
assert.NotEqual(t, querySig, r.HTTPRequest.URL.Query().Get("X-Amz-Signature"))
resignedHeaders := r.HTTPRequest.URL.Query().Get("X-Amz-SignedHeaders")
assert.Equal(t, signedHeaders, resignedHeaders)
assert.NotContains(t, signedHeaders, "x-amz-signedHeaders")
assert.NotEqual(t, origSignedAt, r.LastSignedAt)
}
func TestResignRequestExpiredRequest(t *testing.T) {
creds := credentials.NewStaticCredentials("AKID", "SECRET", "SESSION")
svc := awstesting.NewClient(&aws.Config{Credentials: creds})
r := svc.NewRequest(
&request.Operation{
Name: "BatchGetItem",
HTTPMethod: "POST",
HTTPPath: "/",
},
nil,
nil,
)
SignSDKRequest(r)
querySig := r.HTTPRequest.Header.Get("Authorization")
origSignedAt := r.LastSignedAt
signSDKRequestWithCurrTime(r, func() time.Time {
// Simulate the request occurred 15 minutes in the past
return time.Now().Add(15 * time.Minute)
})
assert.NotEqual(t, querySig, r.HTTPRequest.Header.Get("Authorization"))
assert.NotEqual(t, origSignedAt, r.LastSignedAt)
}
func BenchmarkPresignRequest(b *testing.B) {
signer := buildSigner()
req, body := buildRequest("dynamodb", "us-east-1", "{}")
for i := 0; i < b.N; i++ {
signer.Presign(req, body, "dynamodb", "us-east-1", 300*time.Second, time.Now())
}
}
func BenchmarkSignRequest(b *testing.B) {
signer := buildSigner()
req, body := buildRequest("dynamodb", "us-east-1", "{}")
for i := 0; i < b.N; i++ {
signer.Sign(req, body, "dynamodb", "us-east-1", time.Now())
}
}
func BenchmarkStripExcessSpaces(b *testing.B) {
vals := []string{
`AWS4-HMAC-SHA256 Credential=AKIDFAKEIDFAKEID/20160628/us-west-2/s3/aws4_request, SignedHeaders=host;x-amz-date, Signature=1234567890abcdef1234567890abcdef1234567890abcdef`,
`123 321 123 321`,
` 123 321 123 321 `,
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
stripExcessSpaces(vals)
}
}

View File

@ -1,75 +0,0 @@
package aws
import (
"math/rand"
"testing"
"github.com/stretchr/testify/assert"
)
func TestWriteAtBuffer(t *testing.T) {
b := &WriteAtBuffer{}
n, err := b.WriteAt([]byte{1}, 0)
assert.NoError(t, err)
assert.Equal(t, 1, n)
n, err = b.WriteAt([]byte{1, 1, 1}, 5)
assert.NoError(t, err)
assert.Equal(t, 3, n)
n, err = b.WriteAt([]byte{2}, 1)
assert.NoError(t, err)
assert.Equal(t, 1, n)
n, err = b.WriteAt([]byte{3}, 2)
assert.NoError(t, err)
assert.Equal(t, 1, n)
assert.Equal(t, []byte{1, 2, 3, 0, 0, 1, 1, 1}, b.Bytes())
}
func BenchmarkWriteAtBuffer(b *testing.B) {
buf := &WriteAtBuffer{}
r := rand.New(rand.NewSource(1))
b.ResetTimer()
for i := 0; i < b.N; i++ {
to := r.Intn(10) * 4096
bs := make([]byte, to)
buf.WriteAt(bs, r.Int63n(10)*4096)
}
}
func BenchmarkWriteAtBufferOrderedWrites(b *testing.B) {
// test the performance of a WriteAtBuffer when written in an
// ordered fashion. This is similar to the behavior of the
// s3.Downloader, since it downloads the first chunk of the file, then
// the second, and so on.
//
// This test simulates a 150MB file being written in 30 ordered 5MB chunks.
chunk := int64(5e6)
max := chunk * 30
// we'll write the same 5MB chunk every time
tmp := make([]byte, chunk)
for i := 0; i < b.N; i++ {
buf := &WriteAtBuffer{}
for i := int64(0); i < max; i += chunk {
buf.WriteAt(tmp, i)
}
}
}
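
The ordered-write pattern in this benchmark is how a `WriteAtBuffer` is typically used in practice: as the `io.WriterAt` target of `s3manager.Downloader`, which writes each downloaded chunk at its offset. A minimal sketch, assuming an existing session and placeholder bucket/key names:

```
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// downloadToBuffer collects a whole S3 object in memory. The downloader
// fetches the object in chunks and calls buf.WriteAt with each chunk's
// offset, which is the access pattern benchmarked above.
func downloadToBuffer(sess *session.Session) ([]byte, error) {
	buf := aws.NewWriteAtBuffer([]byte{})
	downloader := s3manager.NewDownloader(sess)

	if _, err := downloader.Download(buf, &s3.GetObjectInput{
		Bucket: aws.String("my-bucket"), // placeholder
		Key:    aws.String("my-key"),    // placeholder
	}); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```
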
func BenchmarkWriteAtBufferParallel(b *testing.B) {
buf := &WriteAtBuffer{}
r := rand.New(rand.NewSource(1))
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
to := r.Intn(10) * 4096
bs := make([]byte, to)
buf.WriteAt(bs, r.Int63n(10)*4096)
}
})
}

View File

@ -1,19 +0,0 @@
{
"ImportPath": "github.com/aws/aws-sdk-go/awsmigrate/awsmigrate-renamer",
"GoVersion": "go1.6",
"GodepVersion": "v60",
"Deps": [
{
"ImportPath": "golang.org/x/tools/go/ast/astutil",
"Rev": "b75b3f5cd5d50fbb1fb88ce784d2e7cca17bba8a"
},
{
"ImportPath": "golang.org/x/tools/go/buildutil",
"Rev": "b75b3f5cd5d50fbb1fb88ce784d2e7cca17bba8a"
},
{
"ImportPath": "golang.org/x/tools/go/loader",
"Rev": "b75b3f5cd5d50fbb1fb88ce784d2e7cca17bba8a"
}
]
}

View File

@ -1,5 +0,0 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

View File

@ -1,200 +0,0 @@
// +build go1.5
package main
import (
"bytes"
"go/format"
"io"
"os"
"path/filepath"
"sort"
"strings"
"text/template"
"github.com/aws/aws-sdk-go/private/model/api"
)
type pkg struct {
oldAPI *api.API
newAPI *api.API
shapes map[string]*shapentry
operations map[string]*opentry
}
type shapentry struct {
oldShape *api.Shape
newShape *api.Shape
}
type opentry struct {
oldName string
newName string
}
type packageRenames struct {
Shapes map[string]string
Operations map[string]string
Fields map[string]string
}
var exportMap = map[string]*packageRenames{}
func generateRenames(w io.Writer) error {
tmpl, err := template.New("renames").Parse(t)
if err != nil {
return err
}
out := bytes.NewBuffer(nil)
if err = tmpl.Execute(out, exportMap); err != nil {
return err
}
b, err := format.Source(bytes.TrimSpace(out.Bytes()))
if err != nil {
return err
}
_, err = io.Copy(w, bytes.NewReader(b))
return err
}
const t = `
package rename
// THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT.
var renamedPackages = map[string]*packageRenames{
{{ range $key, $entry := . }}"{{ $key }}": &packageRenames{
operations: map[string]string{
{{ range $old, $new := $entry.Operations }}"{{ $old }}": "{{ $new }}",
{{ end }}
},
shapes: map[string]string{
{{ range $old, $new := $entry.Shapes }}"{{ $old }}": "{{ $new }}",
{{ end }}
},
fields: map[string]string{
{{ range $old, $new := $entry.Fields }}"{{ $old }}": "{{ $new }}",
{{ end }}
},
},
{{ end }}
}
`
func (p *pkg) buildRenames() {
pkgName := "github.com/aws/aws-sdk-go/service/" + p.oldAPI.PackageName()
if exportMap[pkgName] == nil {
exportMap[pkgName] = &packageRenames{map[string]string{}, map[string]string{}, map[string]string{}}
}
ifacename := "github.com/aws/aws-sdk-go/service/" + p.oldAPI.PackageName() + "/" +
p.oldAPI.InterfacePackageName()
if exportMap[ifacename] == nil {
exportMap[ifacename] = &packageRenames{map[string]string{}, map[string]string{}, map[string]string{}}
}
for _, entry := range p.operations {
if entry.oldName != entry.newName {
pkgNames := []string{pkgName, ifacename}
for _, p := range pkgNames {
exportMap[p].Operations[entry.oldName] = entry.newName
exportMap[p].Operations[entry.oldName+"Request"] = entry.newName + "Request"
exportMap[p].Operations[entry.oldName+"Pages"] = entry.newName + "Pages"
}
}
}
for _, entry := range p.shapes {
if entry.oldShape.Type == "structure" {
if entry.oldShape.ShapeName != entry.newShape.ShapeName {
exportMap[pkgName].Shapes[entry.oldShape.ShapeName] = entry.newShape.ShapeName
}
for _, n := range entry.oldShape.MemberNames() {
for _, m := range entry.newShape.MemberNames() {
if n != m && strings.ToLower(n) == strings.ToLower(m) {
exportMap[pkgName].Fields[n] = m
}
}
}
}
}
}
func load(file string) *pkg {
p := &pkg{&api.API{}, &api.API{}, map[string]*shapentry{}, map[string]*opentry{}}
p.oldAPI.Attach(file)
p.oldAPI.Setup()
p.newAPI.Attach(file)
p.newAPI.Setup()
for _, name := range p.oldAPI.OperationNames() {
p.operations[strings.ToLower(name)] = &opentry{oldName: name}
}
for _, name := range p.newAPI.OperationNames() {
p.operations[strings.ToLower(name)].newName = name
}
for _, shape := range p.oldAPI.ShapeList() {
p.shapes[strings.ToLower(shape.ShapeName)] = &shapentry{oldShape: shape}
}
for _, shape := range p.newAPI.ShapeList() {
if _, ok := p.shapes[strings.ToLower(shape.ShapeName)]; !ok {
panic("missing shape " + shape.ShapeName)
}
p.shapes[strings.ToLower(shape.ShapeName)].newShape = shape
}
return p
}
var excludeServices = map[string]struct{}{
"simpledb": {},
"importexport": {},
}
func main() {
files, _ := filepath.Glob("../../apis/*/*/api-2.json")
sort.Strings(files)
// Remove old API versions from list
m := map[string]bool{}
for i := range files {
idx := len(files) - 1 - i
parts := strings.Split(files[idx], string(filepath.Separator))
svc := parts[len(parts)-3] // service name is the 3rd-to-last path component
if m[svc] {
files[idx] = "" // wipe this one out if we already saw the service
}
m[svc] = true
}
for i := range files {
file := files[i]
if file == "" { // empty file
continue
}
if g := load(file); g != nil {
if _, ok := excludeServices[g.oldAPI.PackageName()]; !ok {
g.buildRenames()
}
}
}
outfile, err := os.Create("rename/renames.go")
if err != nil {
panic(err)
}
if err := generateRenames(outfile); err != nil {
panic(err)
}
}

View File

@ -1,116 +0,0 @@
// +build go1.5
package rename
import (
"bytes"
"flag"
"fmt"
"go/format"
"go/parser"
"go/token"
"go/types"
"io/ioutil"
"golang.org/x/tools/go/loader"
)
var dryRun = flag.Bool("dryrun", false, "Dry run")
var verbose = flag.Bool("verbose", false, "Verbose")
type packageRenames struct {
operations map[string]string
shapes map[string]string
fields map[string]string
}
type renamer struct {
*loader.Program
files map[*token.File]bool
}
// ParsePathsFromArgs parses arguments from command line and looks at import
// paths to rename objects.
func ParsePathsFromArgs() {
flag.Parse()
for _, dir := range flag.Args() {
var conf loader.Config
conf.ParserMode = parser.ParseComments
conf.ImportWithTests(dir)
prog, err := conf.Load()
if err != nil {
panic(err)
}
r := renamer{prog, map[*token.File]bool{}}
r.parse()
if !*dryRun {
r.write()
}
}
}
func (r *renamer) dryInfo() string {
if *dryRun {
return "[DRY-RUN]"
}
return "[!]"
}
func (r *renamer) printf(msg string, args ...interface{}) {
if *verbose {
fmt.Printf(msg, args...)
}
}
func (r *renamer) parse() {
for _, pkg := range r.InitialPackages() {
r.parseUses(pkg)
}
}
func (r *renamer) write() {
for _, pkg := range r.InitialPackages() {
for _, f := range pkg.Files {
tokenFile := r.Fset.File(f.Pos())
if r.files[tokenFile] {
var buf bytes.Buffer
format.Node(&buf, r.Fset, f)
if err := ioutil.WriteFile(tokenFile.Name(), buf.Bytes(), 0644); err != nil {
panic(err)
}
}
}
}
}
func (r *renamer) parseUses(pkg *loader.PackageInfo) {
for k, v := range pkg.Uses {
if v.Pkg() != nil {
pkgPath := v.Pkg().Path()
if renames, ok := renamedPackages[pkgPath]; ok {
name := k.Name
switch t := v.(type) {
case *types.Func:
if newName, ok := renames.operations[t.Name()]; ok && newName != name {
r.printf("%s Rename [OPERATION]: %q -> %q\n", r.dryInfo(), name, newName)
r.files[r.Fset.File(k.Pos())] = true
k.Name = newName
}
case *types.TypeName:
if newName, ok := renames.shapes[name]; ok && newName != name {
r.printf("%s Rename [SHAPE]: %q -> %q\n", r.dryInfo(), t.Name(), newName)
r.files[r.Fset.File(k.Pos())] = true
k.Name = newName
}
case *types.Var:
if newName, ok := renames.fields[name]; ok && newName != name {
r.printf("%s Rename [FIELD]: %q -> %q\n", r.dryInfo(), t.Name(), newName)
r.files[r.Fset.File(k.Pos())] = true
k.Name = newName
}
}
}
}
}
}

View File

@ -1,45 +0,0 @@
// +build go1.5
package main
//go:generate go run gen/gen.go
import (
"os"
"os/exec"
"path/filepath"
"strings"
"github.com/aws/aws-sdk-go/awsmigrate/awsmigrate-renamer/rename"
)
var safeTag = "4e554f77f00d527b452c68a46f2e68595284121b"
func main() {
gopath := os.Getenv("GOPATH")
if gopath == "" {
panic("GOPATH not set!")
}
gopath = strings.Split(gopath, ":")[0]
// change directory to SDK
err := os.Chdir(filepath.Join(gopath, "src", "github.com", "aws", "aws-sdk-go"))
if err != nil {
panic("Cannot find SDK repository")
}
// store orig HEAD
head, err := exec.Command("git", "rev-parse", "--abbrev-ref", "HEAD").Output()
if err != nil {
panic("Cannot find SDK repository")
}
origHEAD := strings.Trim(string(head), " \r\n")
// checkout to safe tag and run conversion
exec.Command("git", "checkout", safeTag).Run()
defer func() {
exec.Command("git", "checkout", origHEAD).Run()
}()
rename.ParsePathsFromArgs()
}

View File

@ -1,624 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package astutil
// This file defines utilities for working with source positions.
import (
"fmt"
"go/ast"
"go/token"
"sort"
)
// PathEnclosingInterval returns the node that encloses the source
// interval [start, end), and all its ancestors up to the AST root.
//
// The definition of "enclosing" used by this function considers
// additional whitespace abutting a node to be enclosed by it.
// In this example:
//
// z := x + y // add them
// <-A->
// <----B----->
//
// the ast.BinaryExpr(+) node is considered to enclose interval B
// even though its [Pos()..End()) is actually only interval A.
// This behaviour makes user interfaces more tolerant of imperfect
// input.
//
// This function treats tokens as nodes, though they are not included
// in the result. e.g. PathEnclosingInterval("+") returns the
// enclosing ast.BinaryExpr("x + y").
//
// If start==end, the 1-char interval following start is used instead.
//
// The 'exact' result is true if the interval contains only path[0]
// and perhaps some adjacent whitespace. It is false if the interval
// overlaps multiple children of path[0], or if it contains only
// interior whitespace of path[0].
// In this example:
//
// z := x + y // add them
// <--C--> <---E-->
// ^
// D
//
// intervals C, D and E are inexact. C is contained by the
// z-assignment statement, because it spans three of its children (:=,
// x, +). So too is the 1-char interval D, because it contains only
// interior whitespace of the assignment. E is considered interior
// whitespace of the BlockStmt containing the assignment.
//
// Precondition: [start, end) both lie within the same file as root.
// TODO(adonovan): return (nil, false) in this case and remove precond.
// Requires FileSet; see loader.tokenFileContainsPos.
//
// Postcondition: path is never nil; it always contains at least 'root'.
//
func PathEnclosingInterval(root *ast.File, start, end token.Pos) (path []ast.Node, exact bool) {
// fmt.Printf("EnclosingInterval %d %d\n", start, end) // debugging
// Precondition: node.[Pos..End) and adjoining whitespace contain [start, end).
var visit func(node ast.Node) bool
visit = func(node ast.Node) bool {
path = append(path, node)
nodePos := node.Pos()
nodeEnd := node.End()
// fmt.Printf("visit(%T, %d, %d)\n", node, nodePos, nodeEnd) // debugging
// Intersect [start, end) with interval of node.
if start < nodePos {
start = nodePos
}
if end > nodeEnd {
end = nodeEnd
}
// Find sole child that contains [start, end).
children := childrenOf(node)
l := len(children)
for i, child := range children {
// [childPos, childEnd) is unaugmented interval of child.
childPos := child.Pos()
childEnd := child.End()
// [augPos, augEnd) is whitespace-augmented interval of child.
augPos := childPos
augEnd := childEnd
if i > 0 {
augPos = children[i-1].End() // start of preceding whitespace
}
if i < l-1 {
nextChildPos := children[i+1].Pos()
// Does [start, end) lie between child and next child?
if start >= augEnd && end <= nextChildPos {
return false // inexact match
}
augEnd = nextChildPos // end of following whitespace
}
// fmt.Printf("\tchild %d: [%d..%d)\tcontains interval [%d..%d)?\n",
// i, augPos, augEnd, start, end) // debugging
// Does augmented child strictly contain [start, end)?
if augPos <= start && end <= augEnd {
_, isToken := child.(tokenNode)
return isToken || visit(child)
}
// Does [start, end) overlap multiple children?
// i.e. left-augmented child contains start
// but LR-augmented child does not contain end.
if start < childEnd && end > augEnd {
break
}
}
// No single child contained [start, end),
// so node is the result. Is it exact?
// (It's tempting to put this condition before the
// child loop, but it gives the wrong result in the
// case where a node (e.g. ExprStmt) and its sole
// child have equal intervals.)
if start == nodePos && end == nodeEnd {
return true // exact match
}
return false // inexact: overlaps multiple children
}
if start > end {
start, end = end, start
}
if start < root.End() && end > root.Pos() {
if start == end {
end = start + 1 // empty interval => interval of size 1
}
exact = visit(root)
// Reverse the path:
for i, l := 0, len(path); i < l/2; i++ {
path[i], path[l-1-i] = path[l-1-i], path[i]
}
} else {
// Selection lies within whitespace preceding the
// first (or following the last) declaration in the file.
// The result nonetheless always includes the ast.File.
path = append(path, root)
}
return
}
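// pathEnclosingIntervalExample is an illustrative sketch, not part of the
// original file: given an already-parsed file and a position inside it, it
// prints a description of every node on the returned path, from the
// innermost enclosing node out to the *ast.File, along with the exactness flag.
func pathEnclosingIntervalExample(f *ast.File, pos token.Pos) {
	path, exact := PathEnclosingInterval(f, pos, pos)
	for _, n := range path {
		fmt.Printf("%s (exact=%v)\n", NodeDescription(n), exact)
	}
}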
// tokenNode is a dummy implementation of ast.Node for a single token.
// Such nodes are used transiently by PathEnclosingInterval but never
// escape this package.
//
type tokenNode struct {
pos token.Pos
end token.Pos
}
func (n tokenNode) Pos() token.Pos {
return n.pos
}
func (n tokenNode) End() token.Pos {
return n.end
}
func tok(pos token.Pos, len int) ast.Node {
return tokenNode{pos, pos + token.Pos(len)}
}
// childrenOf returns the direct non-nil children of ast.Node n.
// It may include fake ast.Node implementations for bare tokens.
// It is not safe to call (e.g.) ast.Walk on such nodes.
//
func childrenOf(n ast.Node) []ast.Node {
var children []ast.Node
// First add nodes for all true subtrees.
ast.Inspect(n, func(node ast.Node) bool {
if node == n { // push n
return true // recur
}
if node != nil { // push child
children = append(children, node)
}
return false // no recursion
})
// Then add fake Nodes for bare tokens.
switch n := n.(type) {
case *ast.ArrayType:
children = append(children,
tok(n.Lbrack, len("[")),
tok(n.Elt.End(), len("]")))
case *ast.AssignStmt:
children = append(children,
tok(n.TokPos, len(n.Tok.String())))
case *ast.BasicLit:
children = append(children,
tok(n.ValuePos, len(n.Value)))
case *ast.BinaryExpr:
children = append(children, tok(n.OpPos, len(n.Op.String())))
case *ast.BlockStmt:
children = append(children,
tok(n.Lbrace, len("{")),
tok(n.Rbrace, len("}")))
case *ast.BranchStmt:
children = append(children,
tok(n.TokPos, len(n.Tok.String())))
case *ast.CallExpr:
children = append(children,
tok(n.Lparen, len("(")),
tok(n.Rparen, len(")")))
if n.Ellipsis != 0 {
children = append(children, tok(n.Ellipsis, len("...")))
}
case *ast.CaseClause:
if n.List == nil {
children = append(children,
tok(n.Case, len("default")))
} else {
children = append(children,
tok(n.Case, len("case")))
}
children = append(children, tok(n.Colon, len(":")))
case *ast.ChanType:
switch n.Dir {
case ast.RECV:
children = append(children, tok(n.Begin, len("<-chan")))
case ast.SEND:
children = append(children, tok(n.Begin, len("chan<-")))
case ast.RECV | ast.SEND:
children = append(children, tok(n.Begin, len("chan")))
}
case *ast.CommClause:
if n.Comm == nil {
children = append(children,
tok(n.Case, len("default")))
} else {
children = append(children,
tok(n.Case, len("case")))
}
children = append(children, tok(n.Colon, len(":")))
case *ast.Comment:
// nop
case *ast.CommentGroup:
// nop
case *ast.CompositeLit:
children = append(children,
tok(n.Lbrace, len("{")),
tok(n.Rbrace, len("}")))
case *ast.DeclStmt:
// nop
case *ast.DeferStmt:
children = append(children,
tok(n.Defer, len("defer")))
case *ast.Ellipsis:
children = append(children,
tok(n.Ellipsis, len("...")))
case *ast.EmptyStmt:
// nop
case *ast.ExprStmt:
// nop
case *ast.Field:
// TODO(adonovan): Field.{Doc,Comment,Tag}?
case *ast.FieldList:
children = append(children,
tok(n.Opening, len("(")),
tok(n.Closing, len(")")))
case *ast.File:
// TODO test: Doc
children = append(children,
tok(n.Package, len("package")))
case *ast.ForStmt:
children = append(children,
tok(n.For, len("for")))
case *ast.FuncDecl:
// TODO(adonovan): FuncDecl.Comment?
// Uniquely, FuncDecl breaks the invariant that
// preorder traversal yields tokens in lexical order:
// in fact, FuncDecl.Recv precedes FuncDecl.Type.Func.
//
// As a workaround, we inline the case for FuncType
// here and order things correctly.
//
children = nil // discard ast.Walk(FuncDecl) info subtrees
children = append(children, tok(n.Type.Func, len("func")))
if n.Recv != nil {
children = append(children, n.Recv)
}
children = append(children, n.Name)
if n.Type.Params != nil {
children = append(children, n.Type.Params)
}
if n.Type.Results != nil {
children = append(children, n.Type.Results)
}
if n.Body != nil {
children = append(children, n.Body)
}
case *ast.FuncLit:
// nop
case *ast.FuncType:
if n.Func != 0 {
children = append(children,
tok(n.Func, len("func")))
}
case *ast.GenDecl:
children = append(children,
tok(n.TokPos, len(n.Tok.String())))
if n.Lparen != 0 {
children = append(children,
tok(n.Lparen, len("(")),
tok(n.Rparen, len(")")))
}
case *ast.GoStmt:
children = append(children,
tok(n.Go, len("go")))
case *ast.Ident:
children = append(children,
tok(n.NamePos, len(n.Name)))
case *ast.IfStmt:
children = append(children,
tok(n.If, len("if")))
case *ast.ImportSpec:
// TODO(adonovan): ImportSpec.{Doc,EndPos}?
case *ast.IncDecStmt:
children = append(children,
tok(n.TokPos, len(n.Tok.String())))
case *ast.IndexExpr:
children = append(children,
tok(n.Lbrack, len("[")),
tok(n.Rbrack, len("]")))
case *ast.InterfaceType:
children = append(children,
tok(n.Interface, len("interface")))
case *ast.KeyValueExpr:
children = append(children,
tok(n.Colon, len(":")))
case *ast.LabeledStmt:
children = append(children,
tok(n.Colon, len(":")))
case *ast.MapType:
children = append(children,
tok(n.Map, len("map")))
case *ast.ParenExpr:
children = append(children,
tok(n.Lparen, len("(")),
tok(n.Rparen, len(")")))
case *ast.RangeStmt:
children = append(children,
tok(n.For, len("for")),
tok(n.TokPos, len(n.Tok.String())))
case *ast.ReturnStmt:
children = append(children,
tok(n.Return, len("return")))
case *ast.SelectStmt:
children = append(children,
tok(n.Select, len("select")))
case *ast.SelectorExpr:
// nop
case *ast.SendStmt:
children = append(children,
tok(n.Arrow, len("<-")))
case *ast.SliceExpr:
children = append(children,
tok(n.Lbrack, len("[")),
tok(n.Rbrack, len("]")))
case *ast.StarExpr:
children = append(children, tok(n.Star, len("*")))
case *ast.StructType:
children = append(children, tok(n.Struct, len("struct")))
case *ast.SwitchStmt:
children = append(children, tok(n.Switch, len("switch")))
case *ast.TypeAssertExpr:
children = append(children,
tok(n.Lparen-1, len(".")),
tok(n.Lparen, len("(")),
tok(n.Rparen, len(")")))
case *ast.TypeSpec:
// TODO(adonovan): TypeSpec.{Doc,Comment}?
case *ast.TypeSwitchStmt:
children = append(children, tok(n.Switch, len("switch")))
case *ast.UnaryExpr:
children = append(children, tok(n.OpPos, len(n.Op.String())))
case *ast.ValueSpec:
// TODO(adonovan): ValueSpec.{Doc,Comment}?
case *ast.BadDecl, *ast.BadExpr, *ast.BadStmt:
// nop
}
// TODO(adonovan): opt: merge the logic of ast.Inspect() into
// the switch above so we can make interleaved callbacks for
// both Nodes and Tokens in the right order and avoid the need
// to sort.
sort.Sort(byPos(children))
return children
}
type byPos []ast.Node
func (sl byPos) Len() int {
return len(sl)
}
func (sl byPos) Less(i, j int) bool {
return sl[i].Pos() < sl[j].Pos()
}
func (sl byPos) Swap(i, j int) {
sl[i], sl[j] = sl[j], sl[i]
}
// NodeDescription returns a description of the concrete type of n suitable
// for a user interface.
//
// TODO(adonovan): in some cases (e.g. Field, FieldList, Ident,
// StarExpr) we could be much more specific given the path to the AST
// root. Perhaps we should do that.
//
func NodeDescription(n ast.Node) string {
switch n := n.(type) {
case *ast.ArrayType:
return "array type"
case *ast.AssignStmt:
return "assignment"
case *ast.BadDecl:
return "bad declaration"
case *ast.BadExpr:
return "bad expression"
case *ast.BadStmt:
return "bad statement"
case *ast.BasicLit:
return "basic literal"
case *ast.BinaryExpr:
return fmt.Sprintf("binary %s operation", n.Op)
case *ast.BlockStmt:
return "block"
case *ast.BranchStmt:
switch n.Tok {
case token.BREAK:
return "break statement"
case token.CONTINUE:
return "continue statement"
case token.GOTO:
return "goto statement"
case token.FALLTHROUGH:
return "fall-through statement"
}
case *ast.CallExpr:
return "function call (or conversion)"
case *ast.CaseClause:
return "case clause"
case *ast.ChanType:
return "channel type"
case *ast.CommClause:
return "communication clause"
case *ast.Comment:
return "comment"
case *ast.CommentGroup:
return "comment group"
case *ast.CompositeLit:
return "composite literal"
case *ast.DeclStmt:
return NodeDescription(n.Decl) + " statement"
case *ast.DeferStmt:
return "defer statement"
case *ast.Ellipsis:
return "ellipsis"
case *ast.EmptyStmt:
return "empty statement"
case *ast.ExprStmt:
return "expression statement"
case *ast.Field:
// Can be any of these:
// struct {x, y int} -- struct field(s)
// struct {T} -- anon struct field
// interface {I} -- interface embedding
// interface {f()} -- interface method
// func (A) func(B) C -- receiver, param(s), result(s)
return "field/method/parameter"
case *ast.FieldList:
return "field/method/parameter list"
case *ast.File:
return "source file"
case *ast.ForStmt:
return "for loop"
case *ast.FuncDecl:
return "function declaration"
case *ast.FuncLit:
return "function literal"
case *ast.FuncType:
return "function type"
case *ast.GenDecl:
switch n.Tok {
case token.IMPORT:
return "import declaration"
case token.CONST:
return "constant declaration"
case token.TYPE:
return "type declaration"
case token.VAR:
return "variable declaration"
}
case *ast.GoStmt:
return "go statement"
case *ast.Ident:
return "identifier"
case *ast.IfStmt:
return "if statement"
case *ast.ImportSpec:
return "import specification"
case *ast.IncDecStmt:
if n.Tok == token.INC {
return "increment statement"
}
return "decrement statement"
case *ast.IndexExpr:
return "index expression"
case *ast.InterfaceType:
return "interface type"
case *ast.KeyValueExpr:
return "key/value association"
case *ast.LabeledStmt:
return "statement label"
case *ast.MapType:
return "map type"
case *ast.Package:
return "package"
case *ast.ParenExpr:
return "parenthesized " + NodeDescription(n.X)
case *ast.RangeStmt:
return "range loop"
case *ast.ReturnStmt:
return "return statement"
case *ast.SelectStmt:
return "select statement"
case *ast.SelectorExpr:
return "selector"
case *ast.SendStmt:
return "channel send"
case *ast.SliceExpr:
return "slice expression"
case *ast.StarExpr:
return "*-operation" // load/store expr or pointer type
case *ast.StructType:
return "struct type"
case *ast.SwitchStmt:
return "switch statement"
case *ast.TypeAssertExpr:
return "type assertion"
case *ast.TypeSpec:
return "type specification"
case *ast.TypeSwitchStmt:
return "type switch"
case *ast.UnaryExpr:
return fmt.Sprintf("unary %s operation", n.Op)
case *ast.ValueSpec:
return "value specification"
}
panic(fmt.Sprintf("unexpected node type: %T", n))
}

View File

@ -1,400 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package astutil contains common utilities for working with the Go AST.
package astutil
import (
"fmt"
"go/ast"
"go/token"
"strconv"
"strings"
)
// AddImport adds the import path to the file f, if absent.
func AddImport(fset *token.FileSet, f *ast.File, ipath string) (added bool) {
return AddNamedImport(fset, f, "", ipath)
}
// AddNamedImport adds the import path to the file f, if absent.
// If name is not empty, it is used to rename the import.
//
// For example, calling
// AddNamedImport(fset, f, "pathpkg", "path")
// adds
// import pathpkg "path"
func AddNamedImport(fset *token.FileSet, f *ast.File, name, ipath string) (added bool) {
if imports(f, ipath) {
return false
}
newImport := &ast.ImportSpec{
Path: &ast.BasicLit{
Kind: token.STRING,
Value: strconv.Quote(ipath),
},
}
if name != "" {
newImport.Name = &ast.Ident{Name: name}
}
// Find an import decl to add to.
// The goal is to find an existing import
// whose import path has the longest shared
// prefix with ipath.
var (
bestMatch = -1 // length of longest shared prefix
lastImport = -1 // index in f.Decls of the file's final import decl
impDecl *ast.GenDecl // import decl containing the best match
impIndex = -1 // spec index in impDecl containing the best match
)
for i, decl := range f.Decls {
gen, ok := decl.(*ast.GenDecl)
if ok && gen.Tok == token.IMPORT {
lastImport = i
// Do not add to import "C", to avoid disrupting the
// association with its doc comment, breaking cgo.
if declImports(gen, "C") {
continue
}
// Match an empty import decl if that's all that is available.
if len(gen.Specs) == 0 && bestMatch == -1 {
impDecl = gen
}
// Compute longest shared prefix with imports in this group.
for j, spec := range gen.Specs {
impspec := spec.(*ast.ImportSpec)
n := matchLen(importPath(impspec), ipath)
if n > bestMatch {
bestMatch = n
impDecl = gen
impIndex = j
}
}
}
}
// If no import decl found, add one after the last import.
if impDecl == nil {
impDecl = &ast.GenDecl{
Tok: token.IMPORT,
}
if lastImport >= 0 {
impDecl.TokPos = f.Decls[lastImport].End()
} else {
// There are no existing imports.
// Our new import goes after the package declaration and after
// the comment, if any, that starts on the same line as the
// package declaration.
impDecl.TokPos = f.Package
file := fset.File(f.Package)
pkgLine := file.Line(f.Package)
for _, c := range f.Comments {
if file.Line(c.Pos()) > pkgLine {
break
}
impDecl.TokPos = c.End()
}
}
f.Decls = append(f.Decls, nil)
copy(f.Decls[lastImport+2:], f.Decls[lastImport+1:])
f.Decls[lastImport+1] = impDecl
}
// Insert new import at insertAt.
insertAt := 0
if impIndex >= 0 {
// insert after the found import
insertAt = impIndex + 1
}
impDecl.Specs = append(impDecl.Specs, nil)
copy(impDecl.Specs[insertAt+1:], impDecl.Specs[insertAt:])
impDecl.Specs[insertAt] = newImport
pos := impDecl.Pos()
if insertAt > 0 {
// If there is a comment after an existing import, preserve the comment
// position by adding the new import after the comment.
if spec, ok := impDecl.Specs[insertAt-1].(*ast.ImportSpec); ok && spec.Comment != nil {
pos = spec.Comment.End()
} else {
// Assign same position as the previous import,
// so that the sorter sees it as being in the same block.
pos = impDecl.Specs[insertAt-1].Pos()
}
}
if newImport.Name != nil {
newImport.Name.NamePos = pos
}
newImport.Path.ValuePos = pos
newImport.EndPos = pos
// Clean up parens. impDecl contains at least one spec.
if len(impDecl.Specs) == 1 {
// Remove unneeded parens.
impDecl.Lparen = token.NoPos
} else if !impDecl.Lparen.IsValid() {
// impDecl needs parens added.
impDecl.Lparen = impDecl.Specs[0].Pos()
}
f.Imports = append(f.Imports, newImport)
if len(f.Decls) <= 1 {
return true
}
// Merge all the import declarations into the first one.
var first *ast.GenDecl
for i, decl := range f.Decls {
gen, ok := decl.(*ast.GenDecl)
if !ok || gen.Tok != token.IMPORT || declImports(gen, "C") {
continue
}
if first == nil {
first = gen
continue // Don't touch the first one.
}
// Move the imports of the other import declaration to the first one.
for _, spec := range gen.Specs {
spec.(*ast.ImportSpec).Path.ValuePos = first.Pos()
first.Specs = append(first.Specs, spec)
}
f.Decls = append(f.Decls[:i], f.Decls[i+1:]...)
}
return true
}
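// addNamedImportExample is an illustrative sketch, not part of the original
// file: it ensures f imports "path" under the name pathpkg, mirroring the
// call shown in the doc comment above, and reports whether a new import
// spec was actually added.
func addNamedImportExample(fset *token.FileSet, f *ast.File) bool {
	return AddNamedImport(fset, f, "pathpkg", "path")
}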
// DeleteImport deletes the import path from the file f, if present.
func DeleteImport(fset *token.FileSet, f *ast.File, path string) (deleted bool) {
return DeleteNamedImport(fset, f, "", path)
}
// DeleteNamedImport deletes the import with the given name and path from the file f, if present.
func DeleteNamedImport(fset *token.FileSet, f *ast.File, name, path string) (deleted bool) {
var delspecs []*ast.ImportSpec
// Find the import nodes that import path, if any.
for i := 0; i < len(f.Decls); i++ {
decl := f.Decls[i]
gen, ok := decl.(*ast.GenDecl)
if !ok || gen.Tok != token.IMPORT {
continue
}
for j := 0; j < len(gen.Specs); j++ {
spec := gen.Specs[j]
impspec := spec.(*ast.ImportSpec)
if impspec.Name == nil && name != "" {
continue
}
if impspec.Name != nil && impspec.Name.Name != name {
continue
}
if importPath(impspec) != path {
continue
}
// We found an import spec that imports path.
// Delete it.
delspecs = append(delspecs, impspec)
deleted = true
copy(gen.Specs[j:], gen.Specs[j+1:])
gen.Specs = gen.Specs[:len(gen.Specs)-1]
// If this was the last import spec in this decl,
// delete the decl, too.
if len(gen.Specs) == 0 {
copy(f.Decls[i:], f.Decls[i+1:])
f.Decls = f.Decls[:len(f.Decls)-1]
i--
break
} else if len(gen.Specs) == 1 {
gen.Lparen = token.NoPos // drop parens
}
if j > 0 {
lastImpspec := gen.Specs[j-1].(*ast.ImportSpec)
lastLine := fset.Position(lastImpspec.Path.ValuePos).Line
line := fset.Position(impspec.Path.ValuePos).Line
// We deleted an entry but now there may be
// a blank line-sized hole where the import was.
if line-lastLine > 1 {
// There was a blank line immediately preceding the deleted import,
// so there's no need to close the hole.
// Do nothing.
} else {
// There was no blank line. Close the hole.
fset.File(gen.Rparen).MergeLine(line)
}
}
j--
}
}
// Delete them from f.Imports.
for i := 0; i < len(f.Imports); i++ {
imp := f.Imports[i]
for j, del := range delspecs {
if imp == del {
copy(f.Imports[i:], f.Imports[i+1:])
f.Imports = f.Imports[:len(f.Imports)-1]
copy(delspecs[j:], delspecs[j+1:])
delspecs = delspecs[:len(delspecs)-1]
i--
break
}
}
}
if len(delspecs) > 0 {
panic(fmt.Sprintf("deleted specs from Decls but not Imports: %v", delspecs))
}
return
}
// RewriteImport rewrites any import of path oldPath to path newPath.
func RewriteImport(fset *token.FileSet, f *ast.File, oldPath, newPath string) (rewrote bool) {
for _, imp := range f.Imports {
if importPath(imp) == oldPath {
rewrote = true
// record old End, because the default is to compute
// it using the length of imp.Path.Value.
imp.EndPos = imp.End()
imp.Path.Value = strconv.Quote(newPath)
}
}
return
}
// UsesImport reports whether a given import is used.
func UsesImport(f *ast.File, path string) (used bool) {
spec := importSpec(f, path)
if spec == nil {
return
}
name := spec.Name.String()
switch name {
case "<nil>":
// If the package name is not explicitly specified,
// make an educated guess. This is not guaranteed to be correct.
lastSlash := strings.LastIndex(path, "/")
if lastSlash == -1 {
name = path
} else {
name = path[lastSlash+1:]
}
case "_", ".":
// Not sure if this import is used - err on the side of caution.
return true
}
ast.Walk(visitFn(func(n ast.Node) {
sel, ok := n.(*ast.SelectorExpr)
if ok && isTopName(sel.X, name) {
used = true
}
}), f)
return
}
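// rewriteAndPruneExample is an illustrative sketch, not part of the original
// file: it rewrites an import path and then deletes the import again if
// nothing in the file appears to use the new package.
func rewriteAndPruneExample(fset *token.FileSet, f *ast.File, oldPath, newPath string) {
	if RewriteImport(fset, f, oldPath, newPath) && !UsesImport(f, newPath) {
		DeleteImport(fset, f, newPath)
	}
}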
type visitFn func(node ast.Node)
func (fn visitFn) Visit(node ast.Node) ast.Visitor {
fn(node)
return fn
}
// imports returns true if f imports path.
func imports(f *ast.File, path string) bool {
return importSpec(f, path) != nil
}
// importSpec returns the import spec if f imports path,
// or nil otherwise.
func importSpec(f *ast.File, path string) *ast.ImportSpec {
for _, s := range f.Imports {
if importPath(s) == path {
return s
}
}
return nil
}
// importPath returns the unquoted import path of s,
// or "" if the path is not properly quoted.
func importPath(s *ast.ImportSpec) string {
t, err := strconv.Unquote(s.Path.Value)
if err == nil {
return t
}
return ""
}
// declImports reports whether gen contains an import of path.
func declImports(gen *ast.GenDecl, path string) bool {
if gen.Tok != token.IMPORT {
return false
}
for _, spec := range gen.Specs {
impspec := spec.(*ast.ImportSpec)
if importPath(impspec) == path {
return true
}
}
return false
}
// matchLen counts the '/' separators in the longest common prefix of x and y,
// i.e. the number of complete leading path segments they share.
func matchLen(x, y string) int {
n := 0
for i := 0; i < len(x) && i < len(y) && x[i] == y[i]; i++ {
if x[i] == '/' {
n++
}
}
return n
}
// isTopName returns true if n is a top-level unresolved identifier with the given name.
func isTopName(n ast.Expr, name string) bool {
id, ok := n.(*ast.Ident)
return ok && id.Name == name && id.Obj == nil
}
// Imports returns the file imports grouped by paragraph.
func Imports(fset *token.FileSet, f *ast.File) [][]*ast.ImportSpec {
var groups [][]*ast.ImportSpec
for _, decl := range f.Decls {
genDecl, ok := decl.(*ast.GenDecl)
if !ok || genDecl.Tok != token.IMPORT {
break
}
group := []*ast.ImportSpec{}
var lastLine int
for _, spec := range genDecl.Specs {
importSpec := spec.(*ast.ImportSpec)
pos := importSpec.Path.ValuePos
line := fset.Position(pos).Line
if lastLine > 0 && pos > 0 && line-lastLine > 1 {
groups = append(groups, group)
group = []*ast.ImportSpec{}
}
group = append(group, importSpec)
lastLine = line
}
groups = append(groups, group)
}
return groups
}
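// importGroupsExample is an illustrative sketch, not part of the original
// file: it prints every import path in f together with the index of the
// blank-line separated group (paragraph) it belongs to.
func importGroupsExample(fset *token.FileSet, f *ast.File) {
	for i, group := range Imports(fset, f) {
		for _, spec := range group {
			fmt.Printf("group %d: %s\n", i, importPath(spec))
		}
	}
}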

View File

@ -1,14 +0,0 @@
package astutil
import "go/ast"
// Unparen returns e with any enclosing parentheses stripped.
func Unparen(e ast.Expr) ast.Expr {
for {
p, ok := e.(*ast.ParenExpr)
if !ok {
return e
}
e = p.X
}
}

View File

@ -1,195 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package buildutil provides utilities related to the go/build
// package in the standard library.
//
// All I/O is done via the build.Context file system interface, which must
// be concurrency-safe.
package buildutil
import (
"go/build"
"os"
"path/filepath"
"sort"
"strings"
"sync"
)
// AllPackages returns the package path of each Go package in any source
// directory of the specified build context (e.g. $GOROOT or an element
// of $GOPATH). Errors are ignored. The results are sorted.
// All package paths are canonical, and thus may contain "/vendor/".
//
// The result may include import paths for directories that contain no
// *.go files, such as "archive" (in $GOROOT/src).
//
// All I/O is done via the build.Context file system interface,
// which must be concurrency-safe.
//
func AllPackages(ctxt *build.Context) []string {
var list []string
ForEachPackage(ctxt, func(pkg string, _ error) {
list = append(list, pkg)
})
sort.Strings(list)
return list
}
// ForEachPackage calls the found function with the package path of
// each Go package it finds in any source directory of the specified
// build context (e.g. $GOROOT or an element of $GOPATH).
// All package paths are canonical, and thus may contain "/vendor/".
//
// If the package directory exists but could not be read, the second
// argument to the found function provides the error.
//
// All I/O is done via the build.Context file system interface,
// which must be concurrency-safe.
//
func ForEachPackage(ctxt *build.Context, found func(importPath string, err error)) {
ch := make(chan item)
var wg sync.WaitGroup
for _, root := range ctxt.SrcDirs() {
root := root
wg.Add(1)
go func() {
allPackages(ctxt, root, ch)
wg.Done()
}()
}
go func() {
wg.Wait()
close(ch)
}()
// All calls to found occur in the caller's goroutine.
for i := range ch {
found(i.importPath, i.err)
}
}
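// forEachPackageExample is an illustrative sketch, not part of the original
// file: it collects the import path of every package whose directory could
// be read and sorts the result, which is essentially what AllPackages does.
func forEachPackageExample(ctxt *build.Context) []string {
	var pkgs []string
	ForEachPackage(ctxt, func(importPath string, err error) {
		if err == nil {
			pkgs = append(pkgs, importPath)
		}
	})
	sort.Strings(pkgs)
	return pkgs
}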
type item struct {
importPath string
err error // (optional)
}
// We use a process-wide counting semaphore to limit
// the number of parallel calls to ReadDir.
var ioLimit = make(chan bool, 20)
func allPackages(ctxt *build.Context, root string, ch chan<- item) {
root = filepath.Clean(root) + string(os.PathSeparator)
var wg sync.WaitGroup
var walkDir func(dir string)
walkDir = func(dir string) {
// Avoid .foo, _foo, and testdata directory trees.
base := filepath.Base(dir)
if base == "" || base[0] == '.' || base[0] == '_' || base == "testdata" {
return
}
pkg := filepath.ToSlash(strings.TrimPrefix(dir, root))
// Prune search if we encounter any of these import paths.
switch pkg {
case "builtin":
return
}
ioLimit <- true
files, err := ReadDir(ctxt, dir)
<-ioLimit
if pkg != "" || err != nil {
ch <- item{pkg, err}
}
for _, fi := range files {
fi := fi
if fi.IsDir() {
wg.Add(1)
go func() {
walkDir(filepath.Join(dir, fi.Name()))
wg.Done()
}()
}
}
}
walkDir(root)
wg.Wait()
}
// ExpandPatterns returns the set of packages matched by patterns,
// which may have the following forms:
//
// golang.org/x/tools/cmd/guru # a single package
// golang.org/x/tools/... # all packages beneath dir
// ... # the entire workspace.
//
// Order is significant: a pattern preceded by '-' removes matching
// packages from the set. For example, these patterns match all encoding
// packages except encoding/xml:
//
// encoding/... -encoding/xml
//
func ExpandPatterns(ctxt *build.Context, patterns []string) map[string]bool {
// TODO(adonovan): support other features of 'go list':
// - "std"/"cmd"/"all" meta-packages
// - "..." not at the end of a pattern
// - relative patterns using "./" or "../" prefix
pkgs := make(map[string]bool)
doPkg := func(pkg string, neg bool) {
if neg {
delete(pkgs, pkg)
} else {
pkgs[pkg] = true
}
}
// Scan entire workspace if wildcards are present.
// TODO(adonovan): opt: scan only the necessary subtrees of the workspace.
var all []string
for _, arg := range patterns {
if strings.HasSuffix(arg, "...") {
all = AllPackages(ctxt)
break
}
}
for _, arg := range patterns {
if arg == "" {
continue
}
neg := arg[0] == '-'
if neg {
arg = arg[1:]
}
if arg == "..." {
// ... matches all packages
for _, pkg := range all {
doPkg(pkg, neg)
}
} else if dir := strings.TrimSuffix(arg, "/..."); dir != arg {
// dir/... matches all packages beneath dir
for _, pkg := range all {
if strings.HasPrefix(pkg, dir) &&
(len(pkg) == len(dir) || pkg[len(dir)] == '/') {
doPkg(pkg, neg)
}
}
} else {
// single package
doPkg(arg, neg)
}
}
return pkgs
}
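// expandPatternsExample is an illustrative sketch, not part of the original
// file: it selects every package below golang.org/x/tools except the cmd
// subtree, using the pattern syntax described in the doc comment above.
func expandPatternsExample(ctxt *build.Context) map[string]bool {
	return ExpandPatterns(ctxt, []string{
		"golang.org/x/tools/...",
		"-golang.org/x/tools/cmd/...",
	})
}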

View File

@ -1,108 +0,0 @@
package buildutil
import (
"fmt"
"go/build"
"io"
"io/ioutil"
"os"
"path"
"path/filepath"
"sort"
"strings"
"time"
)
// FakeContext returns a build.Context for the fake file tree specified
// by pkgs, which maps package import paths to a mapping from file base
// names to contents.
//
// The fake Context has a GOROOT of "/go" and no GOPATH, and overrides
// the necessary file access methods to read from memory instead of the
// real file system.
//
// Unlike a real file tree, the fake one has only two levels---packages
// and files---so ReadDir("/go/src/") returns all packages under
// /go/src/ including, for instance, "math" and "math/big".
// ReadDir("/go/src/math/big") would return all the files in the
// "math/big" package.
//
func FakeContext(pkgs map[string]map[string]string) *build.Context {
clean := func(filename string) string {
f := path.Clean(filepath.ToSlash(filename))
// Removing "/go/src" while respecting segment
// boundaries has this unfortunate corner case:
if f == "/go/src" {
return ""
}
return strings.TrimPrefix(f, "/go/src/")
}
ctxt := build.Default // copy
ctxt.GOROOT = "/go"
ctxt.GOPATH = ""
ctxt.IsDir = func(dir string) bool {
dir = clean(dir)
if dir == "" {
return true // needed by (*build.Context).SrcDirs
}
return pkgs[dir] != nil
}
ctxt.ReadDir = func(dir string) ([]os.FileInfo, error) {
dir = clean(dir)
var fis []os.FileInfo
if dir == "" {
// enumerate packages
for importPath := range pkgs {
fis = append(fis, fakeDirInfo(importPath))
}
} else {
// enumerate files of package
for basename := range pkgs[dir] {
fis = append(fis, fakeFileInfo(basename))
}
}
sort.Sort(byName(fis))
return fis, nil
}
ctxt.OpenFile = func(filename string) (io.ReadCloser, error) {
filename = clean(filename)
dir, base := path.Split(filename)
content, ok := pkgs[path.Clean(dir)][base]
if !ok {
return nil, fmt.Errorf("file not found: %s", filename)
}
return ioutil.NopCloser(strings.NewReader(content)), nil
}
ctxt.IsAbsPath = func(path string) bool {
path = filepath.ToSlash(path)
// Don't rely on the default (filepath.Path) since on
// Windows, it reports virtual paths as non-absolute.
return strings.HasPrefix(path, "/")
}
return &ctxt
}
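// fakeContextExample is an illustrative sketch, not part of the original
// file: it builds a fake two-level file tree with a single package "a"
// containing one source file, useful in tests that need a *build.Context
// without touching the real file system.
func fakeContextExample() *build.Context {
	return FakeContext(map[string]map[string]string{
		"a": {"a.go": `package a; const Answer = 42`},
	})
}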
type byName []os.FileInfo
func (s byName) Len() int { return len(s) }
func (s byName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s byName) Less(i, j int) bool { return s[i].Name() < s[j].Name() }
type fakeFileInfo string
func (fi fakeFileInfo) Name() string { return string(fi) }
func (fakeFileInfo) Sys() interface{} { return nil }
func (fakeFileInfo) ModTime() time.Time { return time.Time{} }
func (fakeFileInfo) IsDir() bool { return false }
func (fakeFileInfo) Size() int64 { return 0 }
func (fakeFileInfo) Mode() os.FileMode { return 0644 }
type fakeDirInfo string
func (fd fakeDirInfo) Name() string { return string(fd) }
func (fakeDirInfo) Sys() interface{} { return nil }
func (fakeDirInfo) ModTime() time.Time { return time.Time{} }
func (fakeDirInfo) IsDir() bool { return true }
func (fakeDirInfo) Size() int64 { return 0 }
func (fakeDirInfo) Mode() os.FileMode { return 0755 }

View File

@ -1,75 +0,0 @@
package buildutil
// This logic was copied from stringsFlag from $GOROOT/src/cmd/go/build.go.
import "fmt"
const TagsFlagDoc = "a list of `build tags` to consider satisfied during the build. " +
"For more information about build tags, see the description of " +
"build constraints in the documentation for the go/build package"
// TagsFlag is an implementation of the flag.Value and flag.Getter interfaces that parses
// a flag value in the same manner as go build's -tags flag and
// populates a []string slice.
//
// See $GOROOT/src/go/build/doc.go for description of build tags.
// See $GOROOT/src/cmd/go/doc.go for description of 'go build -tags' flag.
//
// Example:
// flag.Var((*buildutil.TagsFlag)(&build.Default.BuildTags), "tags", buildutil.TagsFlagDoc)
type TagsFlag []string
func (v *TagsFlag) Set(s string) error {
var err error
*v, err = splitQuotedFields(s)
if *v == nil {
*v = []string{}
}
return err
}
func (v *TagsFlag) Get() interface{} { return *v }
func splitQuotedFields(s string) ([]string, error) {
// Split fields allowing '' or "" around elements.
// Quotes further inside the string do not count.
var f []string
for len(s) > 0 {
for len(s) > 0 && isSpaceByte(s[0]) {
s = s[1:]
}
if len(s) == 0 {
break
}
// Accepted quoted string. No unescaping inside.
if s[0] == '"' || s[0] == '\'' {
quote := s[0]
s = s[1:]
i := 0
for i < len(s) && s[i] != quote {
i++
}
if i >= len(s) {
return nil, fmt.Errorf("unterminated %c string", quote)
}
f = append(f, s[:i])
s = s[i+1:]
continue
}
i := 0
for i < len(s) && !isSpaceByte(s[i]) {
i++
}
f = append(f, s[:i])
s = s[i:]
}
return f, nil
}
func (v *TagsFlag) String() string {
return "<tagsFlag>"
}
func isSpaceByte(c byte) bool {
return c == ' ' || c == '\t' || c == '\n' || c == '\r'
}
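// tagsFlagExample is an illustrative sketch, not part of the original file:
// it parses a -tags style value directly via Set, so that for example
// `linux 'purego'` becomes []string{"linux", "purego"}.
func tagsFlagExample(s string) ([]string, error) {
	var tags TagsFlag
	err := tags.Set(s)
	return []string(tags), err
}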

View File

@ -1,167 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package buildutil
import (
"fmt"
"go/ast"
"go/build"
"go/parser"
"go/token"
"io"
"io/ioutil"
"os"
"path"
"path/filepath"
"runtime"
"strings"
)
// ParseFile behaves like parser.ParseFile,
// but uses the build context's file system interface, if any.
//
// If file is not absolute (as defined by IsAbsPath), the (dir, file)
// components are joined using JoinPath; dir must be absolute.
//
// The displayPath function, if provided, is used to transform the
// filename that will be attached to the ASTs.
//
// TODO(adonovan): call this from go/loader.parseFiles when the tree thaws.
//
func ParseFile(fset *token.FileSet, ctxt *build.Context, displayPath func(string) string, dir string, file string, mode parser.Mode) (*ast.File, error) {
if !IsAbsPath(ctxt, file) {
file = JoinPath(ctxt, dir, file)
}
rd, err := OpenFile(ctxt, file)
if err != nil {
return nil, err
}
defer rd.Close() // ignore error
if displayPath != nil {
file = displayPath(file)
}
return parser.ParseFile(fset, file, rd, mode)
}
// ContainingPackage returns the package containing filename.
//
// If filename is not absolute, it is interpreted relative to working directory dir.
// All I/O is via the build context's file system interface, if any.
//
// The '...Files []string' fields of the resulting build.Package are not
// populated (build.FindOnly mode).
//
// TODO(adonovan): call this from oracle when the tree thaws.
//
func ContainingPackage(ctxt *build.Context, dir, filename string) (*build.Package, error) {
if !IsAbsPath(ctxt, filename) {
filename = JoinPath(ctxt, dir, filename)
}
// We must not assume the file tree uses
// "/" always,
// `\` always,
// or os.PathSeparator (which varies by platform),
// but to make any progress, we are forced to assume that
// paths will not use `\` unless the PathSeparator
// is also `\`, thus we can rely on filepath.ToSlash for some sanity.
dirSlash := path.Dir(filepath.ToSlash(filename)) + "/"
// We assume that no source root (GOPATH[i] or GOROOT) contains any other.
for _, srcdir := range ctxt.SrcDirs() {
srcdirSlash := filepath.ToSlash(srcdir) + "/"
if dirHasPrefix(dirSlash, srcdirSlash) {
importPath := dirSlash[len(srcdirSlash) : len(dirSlash)-len("/")]
return ctxt.Import(importPath, dir, build.FindOnly)
}
}
return nil, fmt.Errorf("can't find package containing %s", filename)
}
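// containingPackageExample is an illustrative sketch, not part of the
// original file: it parses just the package clause of a file through the
// context's file system and then reports the directory of the package
// that contains the file.
func containingPackageExample(ctxt *build.Context, fset *token.FileSet, dir, filename string) (string, error) {
	if _, err := ParseFile(fset, ctxt, nil, dir, filename, parser.PackageClauseOnly); err != nil {
		return "", err
	}
	bp, err := ContainingPackage(ctxt, dir, filename)
	if err != nil {
		return "", err
	}
	return bp.Dir, nil
}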
// dirHasPrefix tests whether the directory dir begins with prefix.
func dirHasPrefix(dir, prefix string) bool {
if runtime.GOOS != "windows" {
return strings.HasPrefix(dir, prefix)
}
return len(dir) >= len(prefix) && strings.EqualFold(dir[:len(prefix)], prefix)
}
// -- Effective methods of file system interface -------------------------
// (go/build.Context defines these as methods, but does not export them.)
// TODO(adonovan): HasSubdir?
// FileExists returns true if the specified file exists,
// using the build context's file system interface.
func FileExists(ctxt *build.Context, path string) bool {
if ctxt.OpenFile != nil {
r, err := ctxt.OpenFile(path)
if err != nil {
return false
}
r.Close() // ignore error
return true
}
_, err := os.Stat(path)
return err == nil
}
// OpenFile behaves like os.Open,
// but uses the build context's file system interface, if any.
func OpenFile(ctxt *build.Context, path string) (io.ReadCloser, error) {
if ctxt.OpenFile != nil {
return ctxt.OpenFile(path)
}
return os.Open(path)
}
// IsAbsPath behaves like filepath.IsAbs,
// but uses the build context's file system interface, if any.
func IsAbsPath(ctxt *build.Context, path string) bool {
if ctxt.IsAbsPath != nil {
return ctxt.IsAbsPath(path)
}
return filepath.IsAbs(path)
}
// JoinPath behaves like filepath.Join,
// but uses the build context's file system interface, if any.
func JoinPath(ctxt *build.Context, path ...string) string {
if ctxt.JoinPath != nil {
return ctxt.JoinPath(path...)
}
return filepath.Join(path...)
}
// IsDir behaves like os.Stat plus IsDir,
// but uses the build context's file system interface, if any.
func IsDir(ctxt *build.Context, path string) bool {
if ctxt.IsDir != nil {
return ctxt.IsDir(path)
}
fi, err := os.Stat(path)
return err == nil && fi.IsDir()
}
// ReadDir behaves like ioutil.ReadDir,
// but uses the build context's file system interface, if any.
func ReadDir(ctxt *build.Context, path string) ([]os.FileInfo, error) {
if ctxt.ReadDir != nil {
return ctxt.ReadDir(path)
}
return ioutil.ReadDir(path)
}
// SplitPathList behaves like filepath.SplitList,
// but uses the build context's file system interface, if any.
func SplitPathList(ctxt *build.Context, s string) []string {
if ctxt.SplitPathList != nil {
return ctxt.SplitPathList(s)
}
return filepath.SplitList(s)
}

View File

@ -1,209 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build go1.5
package loader
// This file handles cgo preprocessing of files containing `import "C"`.
//
// DESIGN
//
// The approach taken is to run the cgo processor on the package's
// CgoFiles and parse the output, faking the filenames of the
// resulting ASTs so that the synthetic file containing the C types is
// called "C" (e.g. "~/go/src/net/C") and the preprocessed files
// have their original names (e.g. "~/go/src/net/cgo_unix.go"),
// not the names of the actual temporary files.
//
// The advantage of this approach is its fidelity to 'go build'. The
// downside is that the token.Position.Offset for each AST node is
// incorrect, being an offset within the temporary file. Line numbers
// should still be correct because of the //line comments.
//
// The logic of this file is mostly plundered from the 'go build'
// tool, which also invokes the cgo preprocessor.
//
//
// REJECTED ALTERNATIVE
//
// An alternative approach that we explored is to extend go/types'
// Importer mechanism to provide the identity of the importing package
// so that each time `import "C"` appears it resolves to a different
// synthetic package containing just the objects needed in that case.
// The loader would invoke cgo but parse only the cgo_types.go file
// defining the package-level objects, discarding the other files
// resulting from preprocessing.
//
// The benefit of this approach would have been that source-level
// syntax information would correspond exactly to the original cgo
// file, with no preprocessing involved, making source tools like
// godoc, oracle, and eg happy. However, the approach was rejected
// due to the additional complexity it would impose on go/types. (It
// made for a beautiful demo, though.)
//
// cgo files, despite their *.go extension, are not legal Go source
// files per the specification since they may refer to unexported
// members of package "C" such as C.int. Also, a function such as
// C.getpwent has in effect two types, one matching its C type and one
// which additionally returns (errno C.int). The cgo preprocessor
// uses name mangling to distinguish these two functions in the
// processed code, but go/types would need to duplicate this logic in
// its handling of function calls, analogous to the treatment of map
// lookups in which y=m[k] and y,ok=m[k] are both legal.
import (
"fmt"
"go/ast"
"go/build"
"go/parser"
"go/token"
"io/ioutil"
"log"
"os"
"os/exec"
"path/filepath"
"regexp"
"strings"
)
// processCgoFiles invokes the cgo preprocessor on bp.CgoFiles, parses
// the output and returns the resulting ASTs.
//
func processCgoFiles(bp *build.Package, fset *token.FileSet, DisplayPath func(path string) string, mode parser.Mode) ([]*ast.File, error) {
tmpdir, err := ioutil.TempDir("", strings.Replace(bp.ImportPath, "/", "_", -1)+"_C")
if err != nil {
return nil, err
}
defer os.RemoveAll(tmpdir)
pkgdir := bp.Dir
if DisplayPath != nil {
pkgdir = DisplayPath(pkgdir)
}
cgoFiles, cgoDisplayFiles, err := runCgo(bp, pkgdir, tmpdir)
if err != nil {
return nil, err
}
var files []*ast.File
for i := range cgoFiles {
rd, err := os.Open(cgoFiles[i])
if err != nil {
return nil, err
}
display := filepath.Join(bp.Dir, cgoDisplayFiles[i])
f, err := parser.ParseFile(fset, display, rd, mode)
rd.Close()
if err != nil {
return nil, err
}
files = append(files, f)
}
return files, nil
}
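// processCgoFilesExample is an illustrative sketch, not part of the original
// file: for a package that uses cgo it runs the preprocessor described
// above and reports how many ASTs were produced (the synthetic "C" file
// plus one per cgo source file).
func processCgoFilesExample(bp *build.Package, fset *token.FileSet) (int, error) {
	files, err := processCgoFiles(bp, fset, nil, parser.ParseComments)
	if err != nil {
		return 0, err
	}
	return len(files), nil
}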
var cgoRe = regexp.MustCompile(`[/\\:]`)
// runCgo invokes the cgo preprocessor on bp.CgoFiles and returns two
// lists of files: the resulting processed files (in temporary
// directory tmpdir) and the corresponding names of the unprocessed files.
//
// runCgo is adapted from (*builder).cgo in
// $GOROOT/src/cmd/go/build.go, but these features are unsupported:
// Objective C, CGOPKGPATH, CGO_FLAGS.
//
func runCgo(bp *build.Package, pkgdir, tmpdir string) (files, displayFiles []string, err error) {
cgoCPPFLAGS, _, _, _ := cflags(bp, true)
_, cgoexeCFLAGS, _, _ := cflags(bp, false)
if len(bp.CgoPkgConfig) > 0 {
pcCFLAGS, err := pkgConfigFlags(bp)
if err != nil {
return nil, nil, err
}
cgoCPPFLAGS = append(cgoCPPFLAGS, pcCFLAGS...)
}
// Allows including _cgo_export.h from .[ch] files in the package.
cgoCPPFLAGS = append(cgoCPPFLAGS, "-I", tmpdir)
// _cgo_gotypes.go (displayed "C") contains the type definitions.
files = append(files, filepath.Join(tmpdir, "_cgo_gotypes.go"))
displayFiles = append(displayFiles, "C")
for _, fn := range bp.CgoFiles {
// "foo.cgo1.go" (displayed "foo.go") is the processed Go source.
f := cgoRe.ReplaceAllString(fn[:len(fn)-len("go")], "_")
files = append(files, filepath.Join(tmpdir, f+"cgo1.go"))
displayFiles = append(displayFiles, fn)
}
var cgoflags []string
if bp.Goroot && bp.ImportPath == "runtime/cgo" {
cgoflags = append(cgoflags, "-import_runtime_cgo=false")
}
if bp.Goroot && bp.ImportPath == "runtime/race" || bp.ImportPath == "runtime/cgo" {
cgoflags = append(cgoflags, "-import_syscall=false")
}
args := stringList(
"go", "tool", "cgo", "-objdir", tmpdir, cgoflags, "--",
cgoCPPFLAGS, cgoexeCFLAGS, bp.CgoFiles,
)
if false {
log.Printf("Running cgo for package %q: %s (dir=%s)", bp.ImportPath, args, pkgdir)
}
cmd := exec.Command(args[0], args[1:]...)
cmd.Dir = pkgdir
cmd.Stdout = os.Stderr
cmd.Stderr = os.Stderr
if err := cmd.Run(); err != nil {
return nil, nil, fmt.Errorf("cgo failed: %s: %s", args, err)
}
return files, displayFiles, nil
}
// -- unmodified from 'go build' ---------------------------------------
// Return the flags to use when invoking the C or C++ compilers, or cgo.
func cflags(p *build.Package, def bool) (cppflags, cflags, cxxflags, ldflags []string) {
var defaults string
if def {
defaults = "-g -O2"
}
cppflags = stringList(envList("CGO_CPPFLAGS", ""), p.CgoCPPFLAGS)
cflags = stringList(envList("CGO_CFLAGS", defaults), p.CgoCFLAGS)
cxxflags = stringList(envList("CGO_CXXFLAGS", defaults), p.CgoCXXFLAGS)
ldflags = stringList(envList("CGO_LDFLAGS", defaults), p.CgoLDFLAGS)
return
}
// envList returns the value of the given environment variable broken
// into fields, using the default value when the variable is empty.
func envList(key, def string) []string {
v := os.Getenv(key)
if v == "" {
v = def
}
return strings.Fields(v)
}
// stringList's arguments should be a sequence of string or []string values.
// stringList flattens them into a single []string.
func stringList(args ...interface{}) []string {
var x []string
for _, arg := range args {
switch arg := arg.(type) {
case []string:
x = append(x, arg...)
case string:
x = append(x, arg)
default:
panic("stringList: invalid argument")
}
}
return x
}

View File

@ -1,39 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package loader
import (
"errors"
"fmt"
"go/build"
"os/exec"
"strings"
)
// pkgConfig runs pkg-config with the specified arguments and returns the flags it prints.
func pkgConfig(mode string, pkgs []string) (flags []string, err error) {
cmd := exec.Command("pkg-config", append([]string{mode}, pkgs...)...)
out, err := cmd.CombinedOutput()
if err != nil {
s := fmt.Sprintf("%s failed: %v", strings.Join(cmd.Args, " "), err)
if len(out) > 0 {
s = fmt.Sprintf("%s: %s", s, out)
}
return nil, errors.New(s)
}
if len(out) > 0 {
flags = strings.Fields(string(out))
}
return
}
// pkgConfigFlags calls pkg-config if needed and returns the cflags
// needed to build the package.
func pkgConfigFlags(p *build.Package) (cflags []string, err error) {
if len(p.CgoPkgConfig) == 0 {
return nil, nil
}
return pkgConfig("--cflags", p.CgoPkgConfig)
}

View File

@ -1,205 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package loader loads a complete Go program from source code, parsing
// and type-checking the initial packages plus their transitive closure
// of dependencies. The ASTs and the derived facts are retained for
// later use.
//
// THIS INTERFACE IS EXPERIMENTAL AND IS LIKELY TO CHANGE.
//
// The package defines two primary types: Config, which specifies a
// set of initial packages to load and various other options; and
// Program, which is the result of successfully loading the packages
// specified by a configuration.
//
// The configuration can be set directly, but *Config provides various
// convenience methods to simplify the common cases, each of which can
// be called any number of times. Finally, these are followed by a
// call to Load() to actually load and type-check the program.
//
// var conf loader.Config
//
// // Use the command-line arguments to specify
// // a set of initial packages to load from source.
// // See FromArgsUsage for help.
// rest, err := conf.FromArgs(os.Args[1:], wantTests)
//
// // Parse the specified files and create an ad hoc package with path "foo".
// // All files must have the same 'package' declaration.
// conf.CreateFromFilenames("foo", "foo.go", "bar.go")
//
// // Create an ad hoc package with path "foo" from
// // the specified already-parsed files.
// // All ASTs must have the same 'package' declaration.
// conf.CreateFromFiles("foo", parsedFiles)
//
// // Add "runtime" to the set of packages to be loaded.
// conf.Import("runtime")
//
// // Adds "fmt" and "fmt_test" to the set of packages
// // to be loaded. "fmt" will include *_test.go files.
// conf.ImportWithTests("fmt")
//
// // Finally, load all the packages specified by the configuration.
// prog, err := conf.Load()
//
// See examples_test.go for examples of API usage.
//
//
// CONCEPTS AND TERMINOLOGY
//
// The WORKSPACE is the set of packages accessible to the loader. The
// workspace is defined by Config.Build, a *build.Context. The
// default context treats subdirectories of $GOROOT and $GOPATH as
// packages, but this behavior may be overridden.
//
// An AD HOC package is one specified as a set of source files on the
// command line. In the simplest case, it may consist of a single file
// such as $GOROOT/src/net/http/triv.go.
//
// EXTERNAL TEST packages are those comprised of a set of *_test.go
// files all with the same 'package foo_test' declaration, all in the
// same directory. (go/build.Package calls these files XTestFiles.)
//
// An IMPORTABLE package is one that can be referred to by some import
// spec. Every importable package is uniquely identified by its
// PACKAGE PATH or just PATH, a string such as "fmt", "encoding/json",
// or "cmd/vendor/golang.org/x/arch/x86/x86asm". A package path
// typically denotes a subdirectory of the workspace.
//
// An import declaration uses an IMPORT PATH to refer to a package.
// Most import declarations use the package path as the import path.
//
// Due to VENDORING (https://golang.org/s/go15vendor), the
// interpretation of an import path may depend on the directory in which
// it appears. To resolve an import path to a package path, go/build
// must search the enclosing directories for a subdirectory named
// "vendor".
//
// ad hoc packages and external test packages are NON-IMPORTABLE. The
// path of an ad hoc package is inferred from the package
// declarations of its files and is therefore not a unique package key.
// For example, Config.CreatePkgs may specify two initial ad hoc
// packages, both with path "main".
//
// An AUGMENTED package is an importable package P plus all the
// *_test.go files with same 'package foo' declaration as P.
// (go/build.Package calls these files TestFiles.)
//
// The INITIAL packages are those specified in the configuration. A
// DEPENDENCY is a package loaded to satisfy an import in an initial
// package or another dependency.
//
package loader
// IMPLEMENTATION NOTES
//
// 'go test', in-package test files, and import cycles
// ---------------------------------------------------
//
// An external test package may depend upon members of the augmented
// package that are not in the unaugmented package, such as functions
// that expose internals. (See bufio/export_test.go for an example.)
// So, the loader must ensure that for each external test package
// it loads, it also augments the corresponding non-test package.
//
// The import graph over n unaugmented packages must be acyclic; the
// import graph over n-1 unaugmented packages plus one augmented
// package must also be acyclic. ('go test' relies on this.) But the
// import graph over n augmented packages may contain cycles.
//
// First, all the (unaugmented) non-test packages and their
// dependencies are imported in the usual way; the loader reports an
// error if it detects an import cycle.
//
// Then, each package P for which testing is desired is augmented by
// the list P' of its in-package test files, by calling
// (*types.Checker).Files. This arrangement ensures that P' may
// reference definitions within P, but P may not reference definitions
// within P'. Furthermore, P' may import any other package, including
// ones that depend upon P, without an import cycle error.
//
// Consider two packages A and B, both of which have lists of
// in-package test files we'll call A' and B', and which have the
// following import graph edges:
// B imports A
// B' imports A
// A' imports B
// This last edge would be expected to create an error were it not
// for the special type-checking discipline above.
// Cycles of size greater than two are possible. For example:
// compress/bzip2/bzip2_test.go (package bzip2) imports "io/ioutil"
// io/ioutil/tempfile_test.go (package ioutil) imports "regexp"
// regexp/exec_test.go (package regexp) imports "compress/bzip2"
//
//
// Concurrency
// -----------
//
// Let us define the import dependency graph as follows. Each node is a
// list of files passed to (Checker).Files at once. Many of these lists
// are the production code of an importable Go package, so those nodes
// are labelled by the package's path. The remaining nodes are
// ad hoc packages and lists of in-package *_test.go files that augment
// an importable package; those nodes have no label.
//
// The edges of the graph represent import statements appearing within a
// file. An edge connects a node (a list of files) to the node it
// imports, which is importable and thus always labelled.
//
// Loading is controlled by this dependency graph.
//
// To reduce I/O latency, we start loading a package's dependencies
// asynchronously as soon as we've parsed its files and enumerated its
// imports (scanImports). This performs a preorder traversal of the
// import dependency graph.
//
// To exploit hardware parallelism, we type-check unrelated packages in
// parallel, where "unrelated" means not ordered by the partial order of
// the import dependency graph.
//
// We use a concurrency-safe non-blocking cache (importer.imported) to
// record the results of type-checking, whether success or failure. An
// entry is created in this cache by startLoad the first time the
// package is imported. The first goroutine to request an entry becomes
// responsible for completing the task and broadcasting completion to
// subsequent requestors, which block until then.
//
// Type checking occurs in (parallel) postorder: we cannot type-check a
// set of files until we have loaded and type-checked all of their
// immediate dependencies (and thus all of their transitive
// dependencies). If the input were guaranteed free of import cycles,
// this would be trivial: we could simply wait for completion of the
// dependencies and then invoke the typechecker.
//
// But as we saw in the 'go test' section above, some cycles in the
// import graph over packages are actually legal, so long as the
// cycle-forming edge originates in the in-package test files that
// augment the package. This explains why the nodes of the import
// dependency graph are not packages, but lists of files: the unlabelled
// nodes avoid the cycles. Consider packages A and B where B imports A
// and A's in-package tests AT import B. The naively constructed import
// graph over packages would contain a cycle (A+AT) --> B --> (A+AT) but
// the graph over lists of files is AT --> B --> A, where AT is an
// unlabelled node.
//
// Awaiting completion of the dependencies in a cyclic graph would
// deadlock, so we must materialize the import dependency graph (as
// importer.graph) and check whether each import edge forms a cycle. If
// x imports y, and the graph already contains a path from y to x, then
// there is an import cycle, in which case the processing of x must not
// wait for the completion of processing of y.
//
// When the type-checker makes a callback (doImport) to the loader for a
// given import edge, there are two possible cases. In the normal case,
// the dependency has already been completely type-checked; doImport
// does a cache lookup and returns it. In the cyclic case, the entry in
// the cache is still necessarily incomplete, indicating a cycle. We
// perform the cycle check again to obtain the error message, and return
// the error.
//
// The result of using concurrency is about a 2.5x speedup for stdlib_test.
// TODO(adonovan): overhaul the package documentation.
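// loadExample is an illustrative sketch, not part of the original file: it
// loads a single package (and its dependencies) from source, mirroring the
// usage shown in the package documentation above.
func loadExample(path string) (*Program, error) {
	var conf Config
	conf.Import(path)
	return conf.Load()
}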

View File

@ -1,13 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build go1.6
package loader
import "go/build"
func init() {
ignoreVendor = build.IgnoreVendor
}

View File

@ -1,124 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package loader
import (
"go/ast"
"go/build"
"go/parser"
"go/token"
"io"
"os"
"strconv"
"sync"
"golang.org/x/tools/go/buildutil"
)
// We use a counting semaphore to limit
// the number of parallel I/O calls per process.
var ioLimit = make(chan bool, 10)
// parseFiles parses the Go source files within directory dir and
// returns the ASTs of the ones that could be at least partially parsed,
// along with a list of I/O and parse errors encountered.
//
// I/O is done via ctxt, which may specify a virtual file system.
// displayPath is used to transform the filenames attached to the ASTs.
//
func parseFiles(fset *token.FileSet, ctxt *build.Context, displayPath func(string) string, dir string, files []string, mode parser.Mode) ([]*ast.File, []error) {
if displayPath == nil {
displayPath = func(path string) string { return path }
}
var wg sync.WaitGroup
n := len(files)
parsed := make([]*ast.File, n)
errors := make([]error, n)
for i, file := range files {
if !buildutil.IsAbsPath(ctxt, file) {
file = buildutil.JoinPath(ctxt, dir, file)
}
wg.Add(1)
go func(i int, file string) {
ioLimit <- true // wait
defer func() {
wg.Done()
<-ioLimit // signal
}()
var rd io.ReadCloser
var err error
if ctxt.OpenFile != nil {
rd, err = ctxt.OpenFile(file)
} else {
rd, err = os.Open(file)
}
if err != nil {
errors[i] = err // open failed
return
}
// ParseFile may return both an AST and an error.
parsed[i], errors[i] = parser.ParseFile(fset, displayPath(file), rd, mode)
rd.Close()
}(i, file)
}
wg.Wait()
// Eliminate nils, preserving order.
var o int
for _, f := range parsed {
if f != nil {
parsed[o] = f
o++
}
}
parsed = parsed[:o]
o = 0
for _, err := range errors {
if err != nil {
errors[o] = err
o++
}
}
errors = errors[:o]
return parsed, errors
}
// scanImports returns the set of all import paths from all
// import specs in the specified files.
func scanImports(files []*ast.File) map[string]bool {
imports := make(map[string]bool)
for _, f := range files {
for _, decl := range f.Decls {
if decl, ok := decl.(*ast.GenDecl); ok && decl.Tok == token.IMPORT {
for _, spec := range decl.Specs {
spec := spec.(*ast.ImportSpec)
// NB: do not assume the program is well-formed!
path, err := strconv.Unquote(spec.Path.Value)
if err != nil {
continue // quietly ignore the error
}
if path == "C" {
continue // skip pseudopackage
}
imports[path] = true
}
}
}
}
return imports
}
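// scanImportsExample is an illustrative sketch, not part of the original
// file: it parses the given files (imports only) through the build context
// and returns the set of import paths they mention.
func scanImportsExample(fset *token.FileSet, ctxt *build.Context, dir string, files []string) map[string]bool {
	parsed, _ := parseFiles(fset, ctxt, nil, dir, files, parser.ImportsOnly)
	return scanImports(parsed)
}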
// ---------- Internal helpers ----------
// TODO(adonovan): make this a method: func (*token.File) Contains(token.Pos)
func tokenFileContainsPos(f *token.File, pos token.Pos) bool {
p := int(pos)
base := f.Base()
return base <= p && p < base+f.Size()
}

View File

@ -1,163 +0,0 @@
package awstesting
import (
"encoding/json"
"encoding/xml"
"fmt"
"net/url"
"reflect"
"regexp"
"sort"
"testing"
)
// Match is a testing helper that reports a test error when the expected
// string does not match the given regular expression.
func Match(t *testing.T, regex, expected string) {
if !regexp.MustCompile(regex).Match([]byte(expected)) {
t.Errorf("%q\n\tdoes not match /%s/", expected, regex)
}
}
// AssertURL verifies the expected URL matches the actual.
func AssertURL(t *testing.T, expect, actual string, msgAndArgs ...interface{}) bool {
expectURL, err := url.Parse(expect)
if err != nil {
t.Errorf(errMsg("unable to parse expected URL", err, msgAndArgs))
return false
}
actualURL, err := url.Parse(actual)
if err != nil {
t.Errorf(errMsg("unable to parse actual URL", err, msgAndArgs))
return false
}
equal(t, expectURL.Host, actualURL.Host, msgAndArgs...)
equal(t, expectURL.Scheme, actualURL.Scheme, msgAndArgs...)
equal(t, expectURL.Path, actualURL.Path, msgAndArgs...)
return AssertQuery(t, expectURL.Query().Encode(), actualURL.Query().Encode(), msgAndArgs...)
}
// AssertQuery verifies the expected HTTP query string matches the actual.
func AssertQuery(t *testing.T, expect, actual string, msgAndArgs ...interface{}) bool {
expectQ, err := url.ParseQuery(expect)
if err != nil {
t.Errorf(errMsg("unable to parse expected Query", err, msgAndArgs))
return false
}
actualQ, err := url.ParseQuery(actual)
if err != nil {
t.Errorf(errMsg("unable to parse actual Query", err, msgAndArgs))
return false
}
// Make sure the keys are the same
if !equal(t, queryValueKeys(expectQ), queryValueKeys(actualQ), msgAndArgs...) {
return false
}
for k, expectQVals := range expectQ {
sort.Strings(expectQVals)
actualQVals := actualQ[k]
sort.Strings(actualQVals)
equal(t, expectQVals, actualQVals, msgAndArgs...)
}
return true
}
// AssertJSON verifies that the expected JSON string matches the actual.
func AssertJSON(t *testing.T, expect, actual string, msgAndArgs ...interface{}) bool {
expectVal := map[string]interface{}{}
if err := json.Unmarshal([]byte(expect), &expectVal); err != nil {
t.Errorf(errMsg("unable to parse expected JSON", err, msgAndArgs...))
return false
}
actualVal := map[string]interface{}{}
if err := json.Unmarshal([]byte(actual), &actualVal); err != nil {
t.Errorf(errMsg("unable to parse actual JSON", err, msgAndArgs...))
return false
}
return equal(t, expectVal, actualVal, msgAndArgs...)
}
// AssertXML verifies that the expected XML string matches the actual.
func AssertXML(t *testing.T, expect, actual string, container interface{}, msgAndArgs ...interface{}) bool {
expectVal := container
if err := xml.Unmarshal([]byte(expect), &expectVal); err != nil {
t.Errorf(errMsg("unable to parse expected XML", err, msgAndArgs...))
}
actualVal := container
if err := xml.Unmarshal([]byte(actual), &actualVal); err != nil {
t.Errorf(errMsg("unable to parse actual XML", err, msgAndArgs...))
}
return equal(t, expectVal, actualVal, msgAndArgs...)
}
// objectsAreEqual determines if two objects are considered equal.
//
// This function does no assertion of any kind.
//
// Based on github.com/stretchr/testify/assert.ObjectsAreEqual
// Copied locally to prevent non-test build dependencies on testify
func objectsAreEqual(expected, actual interface{}) bool {
if expected == nil || actual == nil {
return expected == actual
}
return reflect.DeepEqual(expected, actual)
}
// Equal asserts that two objects are equal.
//
// assert.Equal(t, 123, 123, "123 and 123 should be equal")
//
// Returns whether the assertion was successful (true) or not (false).
//
// Based on github.com/stretchr/testify/assert.Equal
// Copied locally to prevent non-test build dependencies on testify
func equal(t *testing.T, expected, actual interface{}, msgAndArgs ...interface{}) bool {
if !objectsAreEqual(expected, actual) {
t.Errorf("Not Equal:\n\t%#v (expected)\n\t%#v (actual), %s",
expected, actual, messageFromMsgAndArgs(msgAndArgs))
return false
}
return true
}
func errMsg(baseMsg string, err error, msgAndArgs ...interface{}) string {
message := messageFromMsgAndArgs(msgAndArgs)
if message != "" {
message += ", "
}
return fmt.Sprintf("%s%s, %v", message, baseMsg, err)
}
// Based on github.com/stretchr/testify/assert.messageFromMsgAndArgs
// Copied locally to prevent non-test build dependencies on testify
func messageFromMsgAndArgs(msgAndArgs []interface{}) string {
if len(msgAndArgs) == 0 || msgAndArgs == nil {
return ""
}
if len(msgAndArgs) == 1 {
return msgAndArgs[0].(string)
}
if len(msgAndArgs) > 1 {
return fmt.Sprintf(msgAndArgs[0].(string), msgAndArgs[1:]...)
}
return ""
}
func queryValueKeys(v url.Values) []string {
keys := make([]string, 0, len(v))
for k := range v {
keys = append(keys, k)
}
sort.Strings(keys)
return keys
}
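As a quick, hypothetical usage sketch (assuming the package above is importable as github.com/aws/aws-sdk-go/awstesting; the test name is invented), the comparison helpers are insensitive to JSON object key order and query-parameter order:

```
package example_test

import (
	"testing"

	"github.com/aws/aws-sdk-go/awstesting"
)

func TestCompareHelpersSketch(t *testing.T) {
	// Key order differs between expected and actual, but both helpers
	// should report the payloads as equal.
	awstesting.AssertJSON(t,
		`{"a":1,"b":[2,3]}`,
		`{"b":[2,3],"a":1}`,
	)
	awstesting.AssertQuery(t,
		"Action=ListUsers&Version=2010-05-08",
		"Version=2010-05-08&Action=ListUsers",
	)
}
```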


@ -1,64 +0,0 @@
package awstesting_test
import (
"encoding/xml"
"testing"
"github.com/aws/aws-sdk-go/awstesting"
)
func TestAssertJSON(t *testing.T) {
cases := []struct {
e, a string
asserts bool
}{
{
e: `{"RecursiveStruct":{"RecursiveMap":{"foo":{"NoRecurse":"foo"},"bar":{"NoRecurse":"bar"}}}}`,
a: `{"RecursiveStruct":{"RecursiveMap":{"bar":{"NoRecurse":"bar"},"foo":{"NoRecurse":"foo"}}}}`,
asserts: true,
},
}
for i, c := range cases {
mockT := &testing.T{}
if awstesting.AssertJSON(mockT, c.e, c.a) != c.asserts {
t.Error("Assert JSON result was not expected.", i)
}
}
}
func TestAssertXML(t *testing.T) {
cases := []struct {
e, a string
asserts bool
container struct {
XMLName xml.Name `xml:"OperationRequest"`
NS string `xml:"xmlns,attr"`
RecursiveStruct struct {
RecursiveMap struct {
Entries []struct {
XMLName xml.Name `xml:"entries"`
Key string `xml:"key"`
Value struct {
XMLName xml.Name `xml:"value"`
NoRecurse string
}
}
}
}
}
}{
{
e: `<OperationRequest xmlns="https://foo/"><RecursiveStruct xmlns="https://foo/"><RecursiveMap xmlns="https://foo/"><entry xmlns="https://foo/"><key xmlns="https://foo/">foo</key><value xmlns="https://foo/"><NoRecurse xmlns="https://foo/">foo</NoRecurse></value></entry><entry xmlns="https://foo/"><key xmlns="https://foo/">bar</key><value xmlns="https://foo/"><NoRecurse xmlns="https://foo/">bar</NoRecurse></value></entry></RecursiveMap></RecursiveStruct></OperationRequest>`,
a: `<OperationRequest xmlns="https://foo/"><RecursiveStruct xmlns="https://foo/"><RecursiveMap xmlns="https://foo/"><entry xmlns="https://foo/"><key xmlns="https://foo/">bar</key><value xmlns="https://foo/"><NoRecurse xmlns="https://foo/">bar</NoRecurse></value></entry><entry xmlns="https://foo/"><key xmlns="https://foo/">foo</key><value xmlns="https://foo/"><NoRecurse xmlns="https://foo/">foo</NoRecurse></value></entry></RecursiveMap></RecursiveStruct></OperationRequest>`,
asserts: true,
},
}
for i, c := range cases {
// mockT := &testing.T{}
if awstesting.AssertXML(t, c.e, c.a, c.container) != c.asserts {
t.Error("Assert XML result was not expected.", i)
}
}
}


@ -1,20 +0,0 @@
package awstesting
import (
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/client"
"github.com/aws/aws-sdk-go/aws/client/metadata"
"github.com/aws/aws-sdk-go/aws/defaults"
)
// NewClient creates and initializes a generic service client for testing.
func NewClient(cfgs ...*aws.Config) *client.Client {
info := metadata.ClientInfo{
Endpoint: "http://endpoint",
SigningName: "",
}
def := defaults.Get()
def.Config.MergeIn(cfgs...)
return client.New(*def.Config, info, def.Handlers)
}
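A hedged sketch of how such a generic test client is typically exercised (the operation name and test below are invented, and exact default-handler behavior can vary between SDK versions): build a request against it and replace the Send handler with a canned response so nothing leaves the process.

```
package example_test

import (
	"bytes"
	"io/ioutil"
	"net/http"
	"testing"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/awstesting"
)

func TestStubbedRequestSketch(t *testing.T) {
	// Generic client with fake credentials and region; no real AWS calls are made.
	svc := awstesting.NewClient(aws.NewConfig().
		WithRegion("mock-region").
		WithCredentials(credentials.NewStaticCredentials("AKID", "SECRET", "")))

	// FakeOperation is made up; real service clients generate operations
	// from their API models.
	op := &request.Operation{Name: "FakeOperation", HTTPMethod: "POST", HTTPPath: "/"}
	req := svc.NewRequest(op, nil, nil)

	// Swap the Send step for a canned 200 response.
	req.Handlers.Send.Clear()
	req.Handlers.Send.PushBack(func(r *request.Request) {
		r.HTTPResponse = &http.Response{
			StatusCode: 200,
			Header:     http.Header{},
			Body:       ioutil.NopCloser(bytes.NewReader(nil)),
		}
	})

	if err := req.Send(); err != nil {
		t.Errorf("expected stubbed request to succeed, got %v", err)
	}
}
```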


@ -1,124 +0,0 @@
// +build integration
// Package s3_test runs integration tests for S3
package s3_test
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"os"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/awstesting/integration"
"github.com/aws/aws-sdk-go/service/s3"
)
var bucketName *string
var svc *s3.S3
func TestMain(m *testing.M) {
setup()
defer teardown() // only called if we panic
result := m.Run()
teardown()
os.Exit(result)
}
// Create a bucket for testing
func setup() {
svc = s3.New(integration.Session)
bucketName = aws.String(
fmt.Sprintf("aws-sdk-go-integration-%d-%s", time.Now().Unix(), integration.UniqueID()))
for i := 0; i < 10; i++ {
_, err := svc.CreateBucket(&s3.CreateBucketInput{Bucket: bucketName})
if err == nil {
break
}
}
for {
_, err := svc.HeadBucket(&s3.HeadBucketInput{Bucket: bucketName})
if err == nil {
break
}
time.Sleep(1 * time.Second)
}
}
// Delete the bucket
func teardown() {
resp, _ := svc.ListObjects(&s3.ListObjectsInput{Bucket: bucketName})
for _, o := range resp.Contents {
svc.DeleteObject(&s3.DeleteObjectInput{Bucket: bucketName, Key: o.Key})
}
svc.DeleteBucket(&s3.DeleteBucketInput{Bucket: bucketName})
}
func TestWriteToObject(t *testing.T) {
_, err := svc.PutObject(&s3.PutObjectInput{
Bucket: bucketName,
Key: aws.String("key name"),
Body: bytes.NewReader([]byte("hello world")),
})
assert.NoError(t, err)
resp, err := svc.GetObject(&s3.GetObjectInput{
Bucket: bucketName,
Key: aws.String("key name"),
})
assert.NoError(t, err)
b, _ := ioutil.ReadAll(resp.Body)
assert.Equal(t, []byte("hello world"), b)
}
func TestPresignedGetPut(t *testing.T) {
putreq, _ := svc.PutObjectRequest(&s3.PutObjectInput{
Bucket: bucketName,
Key: aws.String("presigned-key"),
})
var err error
// Presign a PUT request
var puturl string
puturl, err = putreq.Presign(300 * time.Second)
assert.NoError(t, err)
// PUT to the presigned URL with a body
var puthttpreq *http.Request
buf := bytes.NewReader([]byte("hello world"))
puthttpreq, err = http.NewRequest("PUT", puturl, buf)
assert.NoError(t, err)
var putresp *http.Response
putresp, err = http.DefaultClient.Do(puthttpreq)
assert.NoError(t, err)
assert.Equal(t, 200, putresp.StatusCode)
// Presign a GET on the same URL
getreq, _ := svc.GetObjectRequest(&s3.GetObjectInput{
Bucket: bucketName,
Key: aws.String("presigned-key"),
})
var geturl string
geturl, err = getreq.Presign(300 * time.Second)
assert.NoError(t, err)
// Get the body
var getresp *http.Response
getresp, err = http.Get(geturl)
assert.NoError(t, err)
var b []byte
defer getresp.Body.Close()
b, err = ioutil.ReadAll(getresp.Body)
assert.Equal(t, "hello world", string(b))
}


@ -1,29 +0,0 @@
// +build integration
//Package s3crypto provides gucumber integration tests support.
package s3crypto
import (
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3crypto"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@s3crypto", func() {
sess := session.New((&aws.Config{
Region: aws.String("us-west-2"),
}).WithLogLevel(aws.LogDebugWithRequestRetries | aws.LogDebugWithRequestErrors))
encryptionClient := s3crypto.NewEncryptionClient(sess, nil, func(c *s3crypto.EncryptionClient) {
})
gucumber.World["encryptionClient"] = encryptionClient
decryptionClient := s3crypto.NewDecryptionClient(sess)
gucumber.World["decryptionClient"] = decryptionClient
gucumber.World["client"] = s3.New(sess)
})
}


@ -1,18 +0,0 @@
# language: en
@s3crypto @client
Feature: S3 Integration Crypto Tests
Scenario: Get all plaintext fixtures for symmetric masterkey aes cbc
When I get all fixtures for "aes_gcm" from "aws-s3-shared-tests"
Then I decrypt each fixture against "Java" "version_2"
And I compare the decrypted ciphertext to the plaintext
Scenario: Uploading Go's SDK fixtures
When I get all fixtures for "aes_gcm" from "aws-s3-shared-tests"
Then I encrypt each fixture with "kms" "AWS_SDK_TEST_ALIAS" "us-west-2" and "aes_gcm"
And upload "Go" data with folder "version_2"
Scenario: Get all plaintext fixtures for symmetric masterkey aes gcm
When I get all fixtures for "aes_gcm" from "aws-s3-shared-tests"
Then I decrypt each fixture against "Go" "version_2"
And I compare the decrypted ciphertext to the plaintext


@ -1,192 +0,0 @@
// +build integration
// Package s3crypto contains shared step definitions that are used across integration tests
package s3crypto
import (
"bytes"
"encoding/base64"
"errors"
"io/ioutil"
"strings"
"github.com/gucumber/gucumber"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/kms"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3crypto"
)
func init() {
gucumber.When(`^I get all fixtures for "(.+?)" from "(.+?)"$`,
func(cekAlg, bucket string) {
prefix := "plaintext_test_case_"
baseFolder := "crypto_tests/" + cekAlg
s3Client := gucumber.World["client"].(*s3.S3)
out, err := s3Client.ListObjects(&s3.ListObjectsInput{
Bucket: aws.String(bucket),
Prefix: aws.String(baseFolder + "/" + prefix),
})
assert.NoError(gucumber.T, err)
plaintexts := make(map[string][]byte)
for _, obj := range out.Contents {
plaintextKey := obj.Key
ptObj, err := s3Client.GetObject(&s3.GetObjectInput{
Bucket: aws.String(bucket),
Key: plaintextKey,
})
assert.NoError(gucumber.T, err)
caseKey := strings.TrimPrefix(*plaintextKey, baseFolder+"/"+prefix)
plaintext, err := ioutil.ReadAll(ptObj.Body)
assert.NoError(gucumber.T, err)
plaintexts[caseKey] = plaintext
}
gucumber.World["baseFolder"] = baseFolder
gucumber.World["bucket"] = bucket
gucumber.World["plaintexts"] = plaintexts
})
gucumber.Then(`^I decrypt each fixture against "(.+?)" "(.+?)"$`, func(lang, version string) {
plaintexts := gucumber.World["plaintexts"].(map[string][]byte)
baseFolder := gucumber.World["baseFolder"].(string)
bucket := gucumber.World["bucket"].(string)
prefix := "ciphertext_test_case_"
s3Client := gucumber.World["client"].(*s3.S3)
s3CryptoClient := gucumber.World["decryptionClient"].(*s3crypto.DecryptionClient)
language := "language_" + lang
ciphertexts := make(map[string][]byte)
for caseKey := range plaintexts {
cipherKey := baseFolder + "/" + version + "/" + language + "/" + prefix + caseKey
// To get metadata for encryption key
ctObj, err := s3Client.GetObject(&s3.GetObjectInput{
Bucket: aws.String(bucket),
Key: &cipherKey,
})
if err != nil {
continue
}
// We don't support wrap, so skip it
if *ctObj.Metadata["X-Amz-Wrap-Alg"] != "kms" {
continue
}
//masterkeyB64 := ctObj.Metadata["Masterkey"]
//masterkey, err := base64.StdEncoding.DecodeString(*masterkeyB64)
//assert.NoError(T, err)
//s3CryptoClient.Config.MasterKey = masterkey
ctObj, err = s3CryptoClient.GetObject(&s3.GetObjectInput{
Bucket: aws.String(bucket),
Key: &cipherKey,
},
)
assert.NoError(gucumber.T, err)
ciphertext, err := ioutil.ReadAll(ctObj.Body)
assert.NoError(gucumber.T, err)
ciphertexts[caseKey] = ciphertext
}
gucumber.World["ciphertexts"] = ciphertexts
})
gucumber.And(`^I compare the decrypted ciphertext to the plaintext$`, func() {
plaintexts := gucumber.World["plaintexts"].(map[string][]byte)
ciphertexts := gucumber.World["ciphertexts"].(map[string][]byte)
for caseKey, ciphertext := range ciphertexts {
assert.Equal(gucumber.T, len(plaintexts[caseKey]), len(ciphertext))
assert.True(gucumber.T, bytes.Equal(plaintexts[caseKey], ciphertext))
}
})
gucumber.Then(`^I encrypt each fixture with "(.+?)" "(.+?)" "(.+?)" and "(.+?)"$`, func(kek, v1, v2, cek string) {
var handler s3crypto.CipherDataGenerator
var builder s3crypto.ContentCipherBuilder
switch kek {
case "kms":
arn, err := getAliasInformation(v1, v2)
assert.Nil(gucumber.T, err)
b64Arn := base64.StdEncoding.EncodeToString([]byte(arn))
assert.Nil(gucumber.T, err)
gucumber.World["Masterkey"] = b64Arn
handler = s3crypto.NewKMSKeyGenerator(kms.New(session.New(&aws.Config{
Region: &v2,
})), arn)
assert.Nil(gucumber.T, err)
default:
gucumber.T.Skip()
}
switch cek {
case "aes_gcm":
builder = s3crypto.AESGCMContentCipherBuilder(handler)
default:
gucumber.T.Skip()
}
sess := session.New(&aws.Config{
Region: aws.String("us-west-2"),
})
c := s3crypto.NewEncryptionClient(sess, builder, func(c *s3crypto.EncryptionClient) {
})
gucumber.World["encryptionClient"] = c
gucumber.World["cek"] = cek
})
gucumber.And(`^upload "(.+?)" data with folder "(.+?)"$`, func(language, folder string) {
c := gucumber.World["encryptionClient"].(*s3crypto.EncryptionClient)
cek := gucumber.World["cek"].(string)
bucket := gucumber.World["bucket"].(string)
plaintexts := gucumber.World["plaintexts"].(map[string][]byte)
key := gucumber.World["Masterkey"].(string)
for caseKey, plaintext := range plaintexts {
input := &s3.PutObjectInput{
Bucket: &bucket,
Key: aws.String("crypto_tests/" + cek + "/" + folder + "/language_" + language + "/ciphertext_test_case_" + caseKey),
Body: bytes.NewReader(plaintext),
Metadata: map[string]*string{
"Masterkey": &key,
},
}
_, err := c.PutObject(input)
assert.Nil(gucumber.T, err)
}
})
}
func getAliasInformation(alias, region string) (string, error) {
arn := ""
svc := kms.New(session.New(&aws.Config{
Region: &region,
}))
truncated := true
var marker *string
for truncated {
out, err := svc.ListAliases(&kms.ListAliasesInput{
Marker: marker,
})
if err != nil {
return arn, err
}
for _, aliasEntry := range out.Aliases {
if *aliasEntry.AliasName == "alias/"+alias {
return *aliasEntry.AliasArn, nil
}
}
truncated = *out.Truncated
marker = out.NextMarker
}
return "", errors.New("The alias " + alias + " does not exist in your account. Please add the proper alias to a key")
}


@ -1,163 +0,0 @@
// +build integration
// Package s3manager provides integration tests for the s3manager package.
package s3manager
import (
"bytes"
"crypto/md5"
"fmt"
"io"
"os"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/awstesting/integration"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
)
var integBuf12MB = make([]byte, 1024*1024*12)
var integMD512MB = fmt.Sprintf("%x", md5.Sum(integBuf12MB))
var bucketName *string
func TestMain(m *testing.M) {
setup()
defer teardown() // only called if we panic
result := m.Run()
teardown()
os.Exit(result)
}
func setup() {
// Create a bucket for testing
svc := s3.New(integration.Session)
bucketName = aws.String(
fmt.Sprintf("aws-sdk-go-integration-%d-%s", time.Now().Unix(), integration.UniqueID()))
for i := 0; i < 10; i++ {
_, err := svc.CreateBucket(&s3.CreateBucketInput{Bucket: bucketName})
if err == nil {
break
}
}
for {
_, err := svc.HeadBucket(&s3.HeadBucketInput{Bucket: bucketName})
if err == nil {
break
}
time.Sleep(1 * time.Second)
}
}
// Delete the bucket
func teardown() {
svc := s3.New(integration.Session)
objs, _ := svc.ListObjects(&s3.ListObjectsInput{Bucket: bucketName})
for _, o := range objs.Contents {
svc.DeleteObject(&s3.DeleteObjectInput{Bucket: bucketName, Key: o.Key})
}
uploads, _ := svc.ListMultipartUploads(&s3.ListMultipartUploadsInput{Bucket: bucketName})
for _, u := range uploads.Uploads {
svc.AbortMultipartUpload(&s3.AbortMultipartUploadInput{
Bucket: bucketName,
Key: u.Key,
UploadId: u.UploadId,
})
}
svc.DeleteBucket(&s3.DeleteBucketInput{Bucket: bucketName})
}
type dlwriter struct {
buf []byte
}
func newDLWriter(size int) *dlwriter {
return &dlwriter{buf: make([]byte, size)}
}
func (d dlwriter) WriteAt(p []byte, pos int64) (n int, err error) {
if pos > int64(len(d.buf)) {
return 0, io.EOF
}
written := 0
for i, b := range p {
if i >= len(d.buf) {
break
}
d.buf[pos+int64(i)] = b
written++
}
return written, nil
}
func validate(t *testing.T, key string, md5value string) {
mgr := s3manager.NewDownloader(integration.Session)
params := &s3.GetObjectInput{Bucket: bucketName, Key: &key}
w := newDLWriter(1024 * 1024 * 20)
n, err := mgr.Download(w, params)
assert.NoError(t, err)
assert.Equal(t, md5value, fmt.Sprintf("%x", md5.Sum(w.buf[0:n])))
}
func TestUploadConcurrently(t *testing.T) {
key := "12mb-1"
mgr := s3manager.NewUploader(integration.Session)
out, err := mgr.Upload(&s3manager.UploadInput{
Bucket: bucketName,
Key: &key,
Body: bytes.NewReader(integBuf12MB),
})
assert.NoError(t, err)
assert.NotEqual(t, "", out.UploadID)
assert.Regexp(t, `^https?://.+/`+key+`$`, out.Location)
validate(t, key, integMD512MB)
}
func TestUploadFailCleanup(t *testing.T) {
svc := s3.New(integration.Session)
// Break checksum on 2nd part so it fails
part := 0
svc.Handlers.Build.PushBack(func(r *request.Request) {
if r.Operation.Name == "UploadPart" {
if part == 1 {
r.HTTPRequest.Header.Set("X-Amz-Content-Sha256", "000")
}
part++
}
})
key := "12mb-leave"
mgr := s3manager.NewUploaderWithClient(svc, func(u *s3manager.Uploader) {
u.LeavePartsOnError = false
})
_, err := mgr.Upload(&s3manager.UploadInput{
Bucket: bucketName,
Key: &key,
Body: bytes.NewReader(integBuf12MB),
})
assert.Error(t, err)
assert.NotContains(t, err.Error(), "MissingRegion")
uploadID := ""
if merr, ok := err.(s3manager.MultiUploadFailure); ok {
uploadID = merr.UploadID()
}
assert.NotEmpty(t, uploadID)
_, err = svc.ListParts(&s3.ListPartsInput{
Bucket: bucketName, Key: &key, UploadId: &uploadID})
assert.Error(t, err)
}


@ -1,44 +0,0 @@
// +build integration
// Package integration performs initialization and validation for integration
// tests.
package integration
import (
"crypto/rand"
"fmt"
"io"
"os"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/session"
)
// Session is a shared session for all integration tests to use.
var Session = session.Must(session.NewSession())
func init() {
logLevel := Session.Config.LogLevel
if os.Getenv("DEBUG") != "" {
logLevel = aws.LogLevel(aws.LogDebug)
}
if os.Getenv("DEBUG_SIGNING") != "" {
logLevel = aws.LogLevel(aws.LogDebugWithSigning)
}
if os.Getenv("DEBUG_BODY") != "" {
logLevel = aws.LogLevel(aws.LogDebugWithSigning | aws.LogDebugWithHTTPBody)
}
Session.Config.LogLevel = logLevel
if aws.StringValue(Session.Config.Region) == "" {
panic("AWS_REGION must be configured to run integration tests")
}
}
// UniqueID returns a unique UUID-like identifier for use in generating
// resources for integration tests.
func UniqueID() string {
uuid := make([]byte, 16)
io.ReadFull(rand.Reader, uuid)
return fmt.Sprintf("%x", uuid)
}


@ -1,14 +0,0 @@
#language en
@acm @client
Feature: AWS Certificate Manager
Scenario: Making a request
When I call the "ListCertificates" API
Then the request should be successful
Scenario: Handling errors
When I attempt to call the "GetCertificate" API with:
| CertificateArn | arn:aws:acm:region:123456789012:certificate/12345678-1234-1234-1234-123456789012 |
Then I expect the response error code to be "ResourceNotFoundException"
And I expect the response error message not be empty


@ -1,16 +0,0 @@
// +build integration
//Package acm provides gucumber integration tests support.
package acm
import (
"github.com/aws/aws-sdk-go/awstesting/integration/smoke"
"github.com/aws/aws-sdk-go/service/acm"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@acm", func() {
gucumber.World["client"] = acm.New(smoke.Session)
})
}


@ -1,16 +0,0 @@
# language: en
@apigateway @client
Feature: Amazon API Gateway
Scenario: Making a request
When I call the "GetAccountRequest" API
Then the request should be successful
Scenario: Handling errors
When I attempt to call the "GetRestApi" API with:
| RestApiId | api123 |
Then I expect the response error code to be "NotFoundException"
And I expect the response error message to include:
"""
Invalid REST API identifier specified
"""


@ -1,16 +0,0 @@
// +build integration
//Package apigateway provides gucumber integration tests support.
package apigateway
import (
"github.com/aws/aws-sdk-go/awstesting/integration/smoke"
"github.com/aws/aws-sdk-go/service/apigateway"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@apigateway", func() {
gucumber.World["client"] = apigateway.New(smoke.Session)
})
}


@ -1,8 +0,0 @@
#language en
@applicationdiscoveryservice @client
Feature: AWS Application Discovery Service
Scenario: Making a request
When I call the "DescribeAgents" API
Then the request should be successful


@ -1,19 +0,0 @@
// +build integration
//Package applicationdiscoveryservice provides gucumber integration tests support.
package applicationdiscoveryservice
import (
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/awstesting/integration/smoke"
"github.com/aws/aws-sdk-go/service/applicationdiscoveryservice"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@applicationdiscoveryservice", func() {
gucumber.World["client"] = applicationdiscoveryservice.New(
smoke.Session, &aws.Config{Region: aws.String("us-west-2")},
)
})
}


@ -1,18 +0,0 @@
# language: en
@autoscaling @client
Feature: Auto Scaling
Scenario: Making a request
When I call the "DescribeScalingProcessTypes" API
Then the value at "Processes" should be a list
Scenario: Handling errors
When I attempt to call the "CreateLaunchConfiguration" API with:
| LaunchConfigurationName | |
| ImageId | ami-12345678 |
| InstanceType | m1.small |
Then I expect the response error code to be "InvalidParameter"
And I expect the response error message to include:
"""
LaunchConfigurationName
"""


@ -1,16 +0,0 @@
// +build integration
//Package autoscaling provides gucumber integration tests support.
package autoscaling
import (
"github.com/aws/aws-sdk-go/awstesting/integration/smoke"
"github.com/aws/aws-sdk-go/service/autoscaling"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@autoscaling", func() {
gucumber.World["client"] = autoscaling.New(smoke.Session)
})
}

View File

@ -1,16 +0,0 @@
// +build integration
//Package cloudformation provides gucumber integration tests support.
package cloudformation
import (
"github.com/aws/aws-sdk-go/awstesting/integration/smoke"
"github.com/aws/aws-sdk-go/service/cloudformation"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@cloudformation", func() {
gucumber.World["client"] = cloudformation.New(smoke.Session)
})
}


@ -1,17 +0,0 @@
# language: en
@cloudformation @client
Feature: AWS CloudFormation
Scenario: Making a request
When I call the "ListStacks" API
Then the value at "StackSummaries" should be a list
Scenario: Handling errors
When I attempt to call the "CreateStack" API with:
| StackName | fakestack |
| TemplateURL | http://s3.amazonaws.com/foo/bar |
Then I expect the response error code to be "ValidationError"
And I expect the response error message to include:
"""
TemplateURL must reference a valid S3 object to which you have access.
"""


@ -1,16 +0,0 @@
// +build integration
//Package cloudfront provides gucumber integration tests support.
package cloudfront
import (
"github.com/aws/aws-sdk-go/awstesting/integration/smoke"
"github.com/aws/aws-sdk-go/service/cloudfront"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@cloudfront", func() {
gucumber.World["client"] = cloudfront.New(smoke.Session)
})
}


@ -1,17 +0,0 @@
# language: en
@cloudfront @client
Feature: Amazon CloudFront
Scenario: Making a basic request
When I call the "ListDistributions" API with:
| MaxItems | 1 |
Then the value at "DistributionList.Items" should be a list
Scenario: Error handling
When I attempt to call the "GetDistribution" API with:
| Id | fake-id |
Then I expect the response error code to be "NoSuchDistribution"
And I expect the response error message to include:
"""
The specified distribution does not exist.
"""


@ -1,16 +0,0 @@
// +build integration
//Package cloudhsm provides gucumber integration tests support.
package cloudhsm
import (
"github.com/aws/aws-sdk-go/awstesting/integration/smoke"
"github.com/aws/aws-sdk-go/service/cloudhsm"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@cloudhsm", func() {
gucumber.World["client"] = cloudhsm.New(smoke.Session)
})
}


@ -1,16 +0,0 @@
# language: en
@cloudhsm @client
Feature: Amazon CloudHSM
Scenario: Making a request
When I call the "ListHapgs" API
Then the value at "HapgList" should be a list
Scenario: Handling errors
When I attempt to call the "DescribeHapg" API with:
| HapgArn | bogus-arn |
Then I expect the response error code to be "ValidationException"
And I expect the response error message to include:
"""
Value 'bogus-arn' at 'hapgArn' failed to satisfy constraint
"""


@ -1,16 +0,0 @@
// +build integration
//Package cloudsearch provides gucumber integration tests support.
package cloudsearch
import (
"github.com/aws/aws-sdk-go/awstesting/integration/smoke"
"github.com/aws/aws-sdk-go/service/cloudsearch"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@cloudsearch", func() {
gucumber.World["client"] = cloudsearch.New(smoke.Session)
})
}


@ -1,16 +0,0 @@
# language: en
@cloudsearch @client
Feature: Amazon CloudSearch
Scenario: Making a request
When I call the "DescribeDomains" API
Then the response should contain a "DomainStatusList"
Scenario: Handling errors
When I attempt to call the "DescribeIndexFields" API with:
| DomainName | fakedomain |
Then I expect the response error code to be "ResourceNotFound"
And I expect the response error message to include:
"""
Domain not found: fakedomain
"""


@ -1,16 +0,0 @@
// +build integration
//Package cloudtrail provides gucumber integration tests support.
package cloudtrail
import (
"github.com/aws/aws-sdk-go/awstesting/integration/smoke"
"github.com/aws/aws-sdk-go/service/cloudtrail"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@cloudtrail", func() {
gucumber.World["client"] = cloudtrail.New(smoke.Session)
})
}


@ -1,16 +0,0 @@
# language: en
@cloudtrail @client
Feature: AWS CloudTrail
Scenario: Making a request
When I call the "DescribeTrails" API
Then the response should contain a "trailList"
Scenario: Handling errors
When I attempt to call the "DeleteTrail" API with:
| Name | faketrail |
Then I expect the response error code to be "TrailNotFoundException"
And I expect the response error message to include:
"""
Unknown trail
"""


@ -1,16 +0,0 @@
// +build integration
//Package cloudwatch provides gucumber integration tests support.
package cloudwatch
import (
"github.com/aws/aws-sdk-go/awstesting/integration/smoke"
"github.com/aws/aws-sdk-go/service/cloudwatch"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@cloudwatch", func() {
gucumber.World["client"] = cloudwatch.New(smoke.Session)
})
}


@ -1,19 +0,0 @@
# language: en
@cloudwatch @monitoring @client
Feature: Amazon CloudWatch
Scenario: Making a request
When I call the "ListMetrics" API with:
| Namespace | AWS/EC2 |
Then the value at "Metrics" should be a list
Scenario: Handling errors
When I attempt to call the "SetAlarmState" API with:
| AlarmName | abc |
| StateValue | mno |
| StateReason | xyz |
Then I expect the response error code to be "ValidationError"
And I expect the response error message to include:
"""
failed to satisfy constraint
"""


@ -1,16 +0,0 @@
// +build integration
//Package cloudwatchlogs provides gucumber integration tests support.
package cloudwatchlogs
import (
"github.com/aws/aws-sdk-go/awstesting/integration/smoke"
"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
"github.com/gucumber/gucumber"
)
func init() {
gucumber.Before("@cloudwatchlogs", func() {
gucumber.World["client"] = cloudwatchlogs.New(smoke.Session)
})
}


@ -1,17 +0,0 @@
# language: en
@cloudwatchlogs @logs
Feature: Amazon CloudWatch Logs
Scenario: Making a request
When I call the "DescribeLogGroups" API
Then the value at "logGroups" should be a list
Scenario: Handling errors
When I attempt to call the "GetLogEvents" API with:
| logGroupName | fakegroup |
| logStreamName | fakestream |
Then I expect the response error code to be "ResourceNotFoundException"
And I expect the response error message to include:
"""
The specified log group does not exist.
"""

Some files were not shown because too many files have changed in this diff.