Mirror of https://github.com/grafana/grafana.git, synced 2025-02-25 18:55:37 -06:00

Merge branch 'main' of https://github.com/grafana/grafana into #45498-entity-events

This commit is contained in commit c3d51424ba.

.github/CODEOWNERS (vendored)
@@ -58,6 +58,7 @@ go.sum @grafana/backend-platform
/pkg/services/live/ @grafana/grafana-edge-squad
/pkg/services/searchV2/ @grafana/grafana-edge-squad
/pkg/services/store/ @grafana/grafana-edge-squad
/pkg/services/export/ @grafana/grafana-edge-squad
/pkg/infra/filestore/ @grafana/grafana-edge-squad
pkg/tsdb/testdatasource/sims/ @grafana/grafana-edge-squad
@@ -20,17 +20,15 @@ First, you need to set up MySQL or Postgres on another server and configure Graf

You can find the configuration for doing that in the [[database]]({{< relref "../administration/configuration.md#database" >}}) section in the Grafana config.

Grafana will now persist all long term data in the database. How to configure the database for high availability is out of scope for this guide. We recommend finding an expert on the database you're using.

## Alerting
## Alerting high availability

**Grafana 8 alerts**

Grafana alerting provides a new [highly-available model]({{< relref "../alerting/unified-alerting/high-availability/_index.md" >}}). It also preserves the semantics of legacy dashboard alerting by executing all alerts on every server and by sending notifications only once per alert. Load distribution between servers is not supported at this time.

Grafana 8 Alerts provides a new highly-available model under the hood. It preserves the previous semantics by executing all alerts on every server and notifications are sent only once per alert. There is no support for load distribution between servers at this time.

For configuration, [follow the guide]({{< relref "../alerting/unified-alerting/high-availability.md" >}}).
For instructions on setting up alerting high availability, see [enable alerting high availability]({{< relref "../alerting/unified-alerting/high-availability/enable-alerting-ha.md" >}}).

**Legacy dashboard alerts**

Legacy Grafana alerting supports a limited form of high availability. [Alert notifications]({{< relref "../alerting/old-alerting/notifications.md" >}}) are deduplicated when running multiple servers. This means all alerts are executed on every server but alert notifications are only sent once per alert. Grafana does not support load distribution between servers.
Legacy Grafana alerting supports a limited form of high availability. In this model, [alert notifications]({{< relref "../alerting/old-alerting/notifications.md" >}}) are deduplicated when running multiple servers. This means all alerts are executed on every server, but alert notifications are only sent once per alert. Grafana does not support load distribution between servers.

## Grafana Live
@@ -8,7 +8,7 @@ weight = 113

Grafana 8.0 has new and improved alerting that centralizes alerting information in a single, searchable view. It is enabled by default for all new OSS instances, and is an [opt-in]({{< relref "./opt-in.md" >}}) feature for older installations that still use legacy dashboard alerting. We encourage you to create issues in the Grafana GitHub repository for bugs found while testing Grafana alerting. See also, [What's New with Grafana alerting]({{< relref "./difference-old-new.md" >}}).

> Refer to [Fine-grained access control]({{< relref "../enterprise/access-control/_index.md" >}}) in Grafana Enterprise to learn more about controlling access to alerts using fine-grained permissions.
> Refer to [Fine-grained access control]({{< relref "../../enterprise/access-control/_index.md" >}}) in Grafana Enterprise to learn more about controlling access to alerts using fine-grained permissions.

When Grafana alerting is enabled, you can:
@@ -1,44 +0,0 @@
+++
title = "High availability"
description = "High availability"
keywords = ["grafana", "alerting", "tutorials", "ha", "high availability"]
weight = 450
+++

# High availability

The Grafana alerting system has two main components: a `Scheduler` and an internal `Alertmanager`. The `Scheduler` is responsible for the evaluation of your [alert rules]({{< relref "./fundamentals/evaluate-grafana-alerts.md" >}}), while the internal Alertmanager takes care of the **routing** and **grouping**.

When running Grafana alerting in high availability, the operational mode of the scheduler is unaffected: all alerts continue to be evaluated on each Grafana instance. Rather, the operational change happens in the Alertmanager, which **deduplicates** alert notifications across Grafana instances.

{{< figure src="/static/img/docs/alerting/unified/high-availability-ua.png" class="docs-image--no-shadow" max-width="750px" caption="High availability" >}}

The coordination between Grafana instances happens via [a Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol). Alerts are not gossiped between instances; each scheduler is expected to deliver the same alerts to each Alertmanager.

The two types of messages that are gossiped between instances are:

- Notification logs: who (which instance) notified what (which alert)
- Silences: whether an alert should fire or not

These two states are persisted in the database periodically and when Grafana is gracefully shut down.

## Enable high availability

To enable high availability support, you need to add at least one Grafana instance to the [`ha_peers` configuration option]({{< relref "../../administration/configuration.md#unified_alerting" >}}) within the `[unified_alerting]` section:

1. In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the `[unified_alerting]` section.
2. Set `ha_peers` to the host:port of each Grafana instance in the cluster, e.g. `ha_peers=10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094`.
3. Gossiping of notifications and silences uses both TCP and UDP port 9094. Each Grafana instance must be able to accept incoming connections on these ports.
4. Set `ha_listen_address` to the instance IP address in host:port format (or the [Pod's](https://kubernetes.io/docs/concepts/workloads/pods/) IP when using Kubernetes). By default, it listens on all interfaces (`0.0.0.0`).

## Kubernetes

If you are using Kubernetes, you can expose the pod IP [through an environment variable](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) via the container definition, such as:

```yaml
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```
@@ -0,0 +1,25 @@
+++
title = "About alerting high availability"
description = "High availability"
keywords = ["grafana", "alerting", "tutorials", "ha", "high availability"]
weight = 450
+++

# About alerting high availability

The Grafana alerting system has two main components: a `Scheduler` and an internal `Alertmanager`. The `Scheduler` evaluates your [alert rules]({{< relref "../fundamentals/evaluate-grafana-alerts.md" >}}), while the internal Alertmanager manages **routing** and **grouping**.

When running Grafana alerting in high availability, the operational mode of the scheduler remains unaffected, and each Grafana instance evaluates all alerts. The operational change happens in the Alertmanager, which deduplicates alert notifications across Grafana instances.

{{< figure src="/static/img/docs/alerting/unified/high-availability-ua.png" class="docs-image--no-shadow" max-width="750px" caption="High availability" >}}

The coordination between Grafana instances happens via [a Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol). Alerts are not gossiped between instances, and each scheduler delivers the same volume of alerts to each Alertmanager.

The two types of messages gossiped between Grafana instances are:

- Notification logs: who (which instance) notified what (which alert).
- Silences: whether an alert should fire or not.

The notification logs and silences are persisted in the database periodically and during a graceful Grafana shutdown.

For configuration instructions, refer to [enable alerting high availability]({{< relref "./enable-alerting-ha.md" >}}).
@@ -0,0 +1,36 @@
+++
title = "Enable alerting high availability"
description = "Enable alerting high availability"
keywords = ["grafana", "alerting", "tutorials", "ha", "high availability"]
weight = 450
+++

# Enable alerting high availability

You can enable [alerting high availability]({{< relref "./_index.md" >}}) support by updating the Grafana configuration file. On Kubernetes, you can enable alerting high availability by updating the Kubernetes container definition.

## Update Grafana configuration file

### Before you begin

Since gossiping of notifications and silences uses both TCP and UDP port `9094`, ensure that each Grafana instance is able to accept incoming connections on these ports.

**To enable high availability support:**

1. In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the `[unified_alerting]` section.
2. Set `ha_peers` to the host:port of each Grafana instance in the cluster, for example, `ha_peers=10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094`.
   You must add at least one (1) Grafana instance to `ha_peers`.
3. Set `ha_listen_address` to the instance IP address using a `host:port` format (or the [Pod's](https://kubernetes.io/docs/concepts/workloads/pods/) IP in the case of using Kubernetes).
   By default, it is set to listen to all interfaces (`0.0.0.0`).

## Update Kubernetes container definition

If you are using Kubernetes, you can expose the pod IP [through an environment variable](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) via the container definition, such as:

```yaml
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```
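The `ha_peers` value is a comma-separated list of host:port entries. A rough sketch of parsing and validating that format (a hypothetical helper for illustration, not part of Grafana):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseHAPeers splits a ha_peers-style value such as
// "10.0.0.5:9094,10.0.0.6:9094" into individual host:port entries,
// rejecting any entry that is not a valid host:port pair.
func parseHAPeers(value string) ([]string, error) {
	var peers []string
	for _, p := range strings.Split(value, ",") {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		if _, _, err := net.SplitHostPort(p); err != nil {
			return nil, fmt.Errorf("invalid peer %q: %w", p, err)
		}
		peers = append(peers, p)
	}
	return peers, nil
}

func main() {
	peers, err := parseHAPeers("10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(peers), peers[0])
}
```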
@@ -13,7 +13,7 @@ Dashboard snapshots are static. Queries and expressions cannot be re-executed f

Before you begin, ensure that you have configured a data source. See also:

- [Working with Grafana dashboard UI]({{< relref "./dashboard-ui/_index.md" >}})
- [Dashboard folders]({{< relref "./dashboard_folders.md" >}})
- [Dashboard folders]({{< relref "./dashboard-folders.md" >}})
- [Create dashboard]({{< relref "./dashboard-create" >}})
- [Manage dashboards]({{< relref "./dashboard-manage.md" >}})
- [Annotations]({{< relref "./annotations.md" >}})

@@ -22,7 +22,7 @@ Before you begin, ensure that you have configured a data source. See also:

- [Keyboard shortcuts]({{< relref "./shortcuts.md" >}})
- [Reporting]({{< relref "./reporting.md" >}})
- [Time range controls]({{< relref "./time-range-controls.md" >}})
- [Dashboard version history]({{< relref "./dashboard_history.md" >}})
- [Dashboard version history]({{< relref "./dashboard-history.md" >}})
- [Dashboard export and import]({{< relref "./export-import.md" >}})
- [Dashboard JSON model]({{< relref "./json-model.md" >}})
- [Scripted dashboards]({{< relref "./scripted-dashboards.md" >}})
@@ -13,7 +13,7 @@ Grafana supports user authentication through Okta, which is useful when you want

## Before you begin

- To configure SAML integration with Okta, create the integration inside the Okta organization first. [Add integration in Okta](https://help.okta.com/en/prod/Content/Topics/Apps/apps-overview-add-apps.htm)
- Ensure you have permission to administer SAML authentication. For more information about permissions, refer to [About users and permissions]({{< relref "../manage-users-and-permissions/about-users-and-permissions.md#" >}}).
- Ensure you have permission to administer SAML authentication. For more information about permissions, refer to [About users and permissions]({{< relref "../../administration/manage-users-and-permissions/about-users-and-permissions.md#" >}}).

**To set up SAML with Okta:**
@@ -214,7 +214,7 @@ This release includes a series of features that build on our new usage analytics

### SAML Role and Team Sync

SAML support in Grafana Enterprise is improved by adding Role and Team Sync. Read more about how to use these features in the [SAML team sync documentation]({{< relref "../enterprise/saml.md#configure-team-sync" >}}).
SAML support in Grafana Enterprise is improved by adding Role and Team Sync. Read more about how to use these features in the [SAML team sync documentation]({{< relref "../enterprise/saml/configure-saml.md#configure-team-sync" >}}).

### Okta OAuth Team Sync
@@ -202,7 +202,7 @@ For more information, refer to [Export logs of usage insights]({{< relref "../en

### New audit log events

New log out events are logged based on when a token expires or is revoked, as well as [SAML Single Logout]({{< relref "../enterprise/saml.md#single-logout" >}}). A `tokenId` field was added to all audit logs to help understand which session was logged out of.
New log out events are logged based on when a token expires or is revoked, as well as [SAML Single Logout]({{< relref "../enterprise/saml/configure-saml.md#single-logout" >}}). A `tokenId` field was added to all audit logs to help understand which session was logged out of.

Also, a counter for audit log writing actions with status (success/failure) and logger (loki/file/console) labels was added.
@@ -416,6 +416,7 @@ export type ExploreQueryFieldProps<

export interface QueryEditorHelpProps<TQuery extends DataQuery = DataQuery> {
  datasource: DataSourceApi<TQuery>;
  query: TQuery;
  onClickExample: (query: TQuery) => void;
  exploreId?: any;
}
@@ -50,6 +50,7 @@ export interface FeatureToggles {
  saveDashboardDrawer?: boolean;
  storage?: boolean;
  alertProvisioning?: boolean;
  export?: boolean;
  storageLocalUpload?: boolean;
  azureMonitorResourcePickerForMetrics?: boolean;
  explore2Dashboard?: boolean;
@@ -499,7 +499,7 @@ describe('GraphNG utils', () => {
          ],
        },
      ],
      "length": 10,
      "length": 12,
    }
  `);
});
@@ -13,7 +13,9 @@ import { nullToUndefThreshold } from './nullToUndefThreshold';
import { XYFieldMatchers } from './types';

function isVisibleBarField(f: Field) {
  return f.config.custom?.drawStyle === GraphDrawStyle.Bars && !f.config.custom?.hideFrom?.viz;
  return (
    f.type === FieldType.number && f.config.custom?.drawStyle === GraphDrawStyle.Bars && !f.config.custom?.hideFrom?.viz
  );
}

// will mutate the DataFrame's fields' values

@@ -105,6 +107,8 @@ export function preparePlotFrame(frames: DataFrame[], dimFields: XYFieldMatchers
        vals.push(undefined, undefined);
      }
    });

    alignedFrame.length += 2;
  }

  return alignedFrame;
@@ -230,29 +230,29 @@ describe('preparePlotData2', () => {
      ],
    });
    expect(preparePlotData2(df, getStackingGroups(df))).toMatchInlineSnapshot(`
      Array [
        Array [
          9997,
          9998,
          9999,
        ],
        Array [
          -10,
          20,
          10,
        ],
        Array [
          10,
          10,
          10,
        ],
        Array [
          20,
          20,
          20,
        ],
      ]
    `);
      Array [
        Array [
          9997,
          9998,
          9999,
        ],
        Array [
          -10,
          20,
          10,
        ],
        Array [
          10,
          10,
          10,
        ],
        Array [
          20,
          20,
          20,
        ],
      ]
    `);
  });

  it('standard', () => {
@@ -289,14 +289,14 @@ describe('preparePlotData2', () => {
          10,
        ],
        Array [
          0,
          30,
          20,
          10,
          10,
          10,
        ],
        Array [
          20,
          50,
          40,
          30,
          30,
          30,
        ],
      ]
    `);
@@ -345,19 +345,19 @@ describe('preparePlotData2', () => {
          10,
        ],
        Array [
          10,
          10,
          10,
        ],
        Array [
          -30,
          0,
          30,
          20,
          -10,
        ],
        Array [
          -40,
          -10,
          -20,
          -20,
          -20,
        ],
        Array [
          -30,
          -30,
          -30,
        ],
      ]
    `);
@@ -413,14 +413,14 @@ describe('preparePlotData2', () => {
          10,
        ],
        Array [
          0,
          30,
          20,
          10,
          10,
          10,
        ],
        Array [
          20,
          50,
          40,
          30,
          30,
          30,
        ],
        Array [
          1,
@@ -580,13 +580,13 @@ describe('auto stacking groups', () => {
        "dir": -1,
        "series": Array [
          1,
          3,
        ],
      },
      Object {
        "dir": 1,
        "series": Array [
          2,
          3,
        ],
      },
    ]

@@ -622,11 +622,6 @@ describe('auto stacking groups', () => {
        "series": Array [
          1,
          2,
        ],
      },
      Object {
        "dir": -1,
        "series": Array [
          3,
        ],
      },
@@ -115,16 +115,17 @@ export function getStackingGroups(frame: DataFrame) {
    // will this be stacked up or down after any transforms applied
    let vals = values.toArray();
    let transform = custom.transform;
    let firstValue = vals.find((v) => v != null);
    let stackDir =
      transform === GraphTransform.Constant
        ? vals[0] > 0
        ? firstValue >= 0
          ? StackDirection.Pos
          : StackDirection.Neg
        : transform === GraphTransform.NegativeY
        ? vals.some((v) => v > 0)
        ? firstValue >= 0
          ? StackDirection.Neg
          : StackDirection.Pos
        : vals.some((v) => v > 0)
        : firstValue >= 0
          ? StackDirection.Pos
          : StackDirection.Neg;
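The change above replaces `vals[0]` with the first non-null sample when deciding stack direction, since a series can begin with nulls. The idea, sketched in Go with hypothetical types (not the Grafana frontend code, which is TypeScript):

```go
package main

import "fmt"

// firstNonNil returns the first non-nil sample. Deciding stack direction
// from the raw first element breaks when the series starts with nulls.
func firstNonNil(vals []*float64) (float64, bool) {
	for _, v := range vals {
		if v != nil {
			return *v, true
		}
	}
	return 0, false
}

// stackDir mirrors the fixed logic: the first real value's sign picks the
// direction; zero, positive, or all-null series stack upward.
func stackDir(vals []*float64) int {
	if v, ok := firstNonNil(vals); ok && v < 0 {
		return -1 // stack downward
	}
	return 1
}

func main() {
	neg, pos := -5.0, 3.0
	fmt.Println(stackDir([]*float64{nil, &neg, &pos})) // -1
	fmt.Println(stackDir([]*float64{nil, &pos}))       // 1
}
```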
@@ -523,6 +523,11 @@ func (hs *HTTPServer) registerRoutes() {
		adminRoute.Get("/crawler/status", reqGrafanaAdmin, routing.Wrap(hs.ThumbService.CrawlerStatus))
	}

	if hs.Features.IsEnabled(featuremgmt.FlagExport) {
		adminRoute.Get("/export", reqGrafanaAdmin, routing.Wrap(hs.ExportService.HandleGetStatus))
		adminRoute.Post("/export", reqGrafanaAdmin, routing.Wrap(hs.ExportService.HandleRequestExport))
	}

	adminRoute.Post("/provisioning/dashboards/reload", authorize(reqGrafanaAdmin, ac.EvalPermission(ActionProvisioningReload, ScopeProvisionersDashboards)), routing.Wrap(hs.AdminProvisioningReloadDashboards))
	adminRoute.Post("/provisioning/plugins/reload", authorize(reqGrafanaAdmin, ac.EvalPermission(ActionProvisioningReload, ScopeProvisionersPlugins)), routing.Wrap(hs.AdminProvisioningReloadPlugins))
	adminRoute.Post("/provisioning/datasources/reload", authorize(reqGrafanaAdmin, ac.EvalPermission(ActionProvisioningReload, ScopeProvisionersDatasources)), routing.Wrap(hs.AdminProvisioningReloadDatasources))
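The export endpoints above are registered only when the `FlagExport` feature flag is on. The gating pattern can be sketched as follows, using a hypothetical flag set rather than Grafana's `featuremgmt` API:

```go
package main

import "fmt"

// features is a minimal stand-in for a feature-flag registry.
type features map[string]bool

func (f features) IsEnabled(name string) bool { return f[name] }

// registerRoutes only exposes the export endpoints when the flag is on,
// so half-finished functionality stays unreachable in default builds.
func registerRoutes(f features) []string {
	routes := []string{"GET /admin/settings"}
	if f.IsEnabled("export") {
		routes = append(routes, "GET /admin/export", "POST /admin/export")
	}
	return routes
}

func main() {
	fmt.Println(len(registerRoutes(features{})))               // 1
	fmt.Println(len(registerRoutes(features{"export": true}))) // 3
}
```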
@@ -97,7 +97,7 @@ func (hs *HTTPServer) GetDataSourceById(c *models.ReqContext) response.Response
		return response.Error(404, "Data source not found", err)
	}

	dto := convertModelToDtos(filtered[0])
	dto := hs.convertModelToDtos(c.Req.Context(), filtered[0])

	// Add accesscontrol metadata
	dto.AccessControl = hs.getAccessControlMetadata(c, c.OrgId, datasources.ScopePrefix, dto.UID)

@@ -128,7 +128,7 @@ func (hs *HTTPServer) DeleteDataSourceById(c *models.ReqContext) response.Respon
		return response.Error(403, "Cannot delete read-only data source", nil)
	}

	cmd := &models.DeleteDataSourceCommand{ID: id, OrgID: c.OrgId}
	cmd := &models.DeleteDataSourceCommand{ID: id, OrgID: c.OrgId, Name: ds.Name}

	err = hs.DataSourcesService.DeleteDataSource(c.Req.Context(), cmd)
	if err != nil {

@@ -156,7 +156,7 @@ func (hs *HTTPServer) GetDataSourceByUID(c *models.ReqContext) response.Response
		return response.Error(404, "Data source not found", err)
	}

	dto := convertModelToDtos(filtered[0])
	dto := hs.convertModelToDtos(c.Req.Context(), filtered[0])

	// Add accesscontrol metadata
	dto.AccessControl = hs.getAccessControlMetadata(c, c.OrgId, datasources.ScopePrefix, dto.UID)

@@ -184,7 +184,7 @@ func (hs *HTTPServer) DeleteDataSourceByUID(c *models.ReqContext) response.Respo
		return response.Error(403, "Cannot delete read-only data source", nil)
	}

	cmd := &models.DeleteDataSourceCommand{UID: uid, OrgID: c.OrgId}
	cmd := &models.DeleteDataSourceCommand{UID: uid, OrgID: c.OrgId, Name: ds.Name}

	err = hs.DataSourcesService.DeleteDataSource(c.Req.Context(), cmd)
	if err != nil {

@@ -265,7 +265,7 @@ func (hs *HTTPServer) AddDataSource(c *models.ReqContext) response.Response {
		return response.Error(500, "Failed to add datasource", err)
	}

	ds := convertModelToDtos(cmd.Result)
	ds := hs.convertModelToDtos(c.Req.Context(), cmd.Result)
	return response.JSON(http.StatusOK, util.DynMap{
		"message": "Datasource added",
		"id":      cmd.Result.Id,

@@ -327,7 +327,7 @@ func (hs *HTTPServer) UpdateDataSource(c *models.ReqContext) response.Response {
		return response.Error(500, "Failed to query datasource", err)
	}

	datasourceDTO := convertModelToDtos(query.Result)
	datasourceDTO := hs.convertModelToDtos(c.Req.Context(), query.Result)

	hs.Live.HandleDatasourceUpdate(c.OrgId, datasourceDTO.UID)

@@ -408,7 +408,7 @@ func (hs *HTTPServer) GetDataSourceByName(c *models.ReqContext) response.Respons
		return response.Error(404, "Data source not found", err)
	}

	dto := convertModelToDtos(filtered[0])
	dto := hs.convertModelToDtos(c.Req.Context(), filtered[0])
	return response.JSON(http.StatusOK, &dto)
}

@@ -457,7 +457,7 @@ func (hs *HTTPServer) CallDatasourceResource(c *models.ReqContext) {
	hs.callPluginResource(c, plugin.ID, ds.Uid)
}

func convertModelToDtos(ds *models.DataSource) dtos.DataSource {
func (hs *HTTPServer) convertModelToDtos(ctx context.Context, ds *models.DataSource) dtos.DataSource {
	dto := dtos.DataSource{
		Id:  ds.Id,
		UID: ds.Uid,

@@ -480,10 +480,15 @@ func convertModelToDtos(ds *models.DataSource) dtos.DataSource {
		ReadOnly: ds.ReadOnly,
	}

	for k, v := range ds.SecureJsonData {
		if len(v) > 0 {
			dto.SecureJsonFields[k] = true
	secrets, err := hs.DataSourcesService.DecryptedValues(ctx, ds)
	if err == nil {
		for k, v := range secrets {
			if len(v) > 0 {
				dto.SecureJsonFields[k] = true
			}
		}
	} else {
		datasourcesLogger.Debug("Failed to retrieve datasource secrets to parse secure json fields", "error", err)
	}

	return dto

@@ -510,7 +515,7 @@ func (hs *HTTPServer) CheckDatasourceHealth(c *models.ReqContext) response.Respo
		return response.Error(http.StatusInternalServerError, "Unable to find datasource plugin", err)
	}

	dsInstanceSettings, err := adapters.ModelToInstanceSettings(ds, hs.decryptSecureJsonDataFn())
	dsInstanceSettings, err := adapters.ModelToInstanceSettings(ds, hs.decryptSecureJsonDataFn(c.Req.Context()))
	if err != nil {
		return response.Error(http.StatusInternalServerError, "Unable to get datasource model", err)
	}

@@ -561,9 +566,9 @@ func (hs *HTTPServer) CheckDatasourceHealth(c *models.ReqContext) response.Respo
	return response.JSON(http.StatusOK, payload)
}

func (hs *HTTPServer) decryptSecureJsonDataFn() func(map[string][]byte) map[string]string {
	return func(m map[string][]byte) map[string]string {
		decryptedJsonData, err := hs.SecretsService.DecryptJsonData(context.Background(), m)
func (hs *HTTPServer) decryptSecureJsonDataFn(ctx context.Context) func(ds *models.DataSource) map[string]string {
	return func(ds *models.DataSource) map[string]string {
		decryptedJsonData, err := hs.DataSourcesService.DecryptedValues(ctx, ds)
		if err != nil {
			hs.log.Error("Failed to decrypt secure json data", "error", err)
		}
@@ -583,3 +583,8 @@ func (m *dataSourcesServiceMock) UpdateDataSource(ctx context.Context, cmd *mode
	cmd.Result = m.expectedDatasource
	return m.expectedError
}

func (m *dataSourcesServiceMock) DecryptedValues(ctx context.Context, ds *models.DataSource) (map[string]string, error) {
	decryptedValues := make(map[string]string)
	return decryptedValues, m.expectedError
}
@@ -248,9 +248,14 @@ func (hs *HTTPServer) getFSDataSources(c *models.ReqContext, enabledPlugins Enab

		if ds.Access == models.DS_ACCESS_DIRECT {
			if ds.BasicAuth {
				password, err := hs.DataSourcesService.DecryptedBasicAuthPassword(c.Req.Context(), ds)
				if err != nil {
					return nil, err
				}

				dsDTO.BasicAuth = util.GetBasicAuthHeader(
					ds.BasicAuthUser,
					hs.DataSourcesService.DecryptedBasicAuthPassword(ds),
					password,
				)
			}
			if ds.WithCredentials {

@@ -258,14 +263,24 @@ func (hs *HTTPServer) getFSDataSources(c *models.ReqContext, enabledPlugins Enab
			}

			if ds.Type == models.DS_INFLUXDB_08 {
				password, err := hs.DataSourcesService.DecryptedPassword(c.Req.Context(), ds)
				if err != nil {
					return nil, err
				}

				dsDTO.Username = ds.User
				dsDTO.Password = hs.DataSourcesService.DecryptedPassword(ds)
				dsDTO.Password = password
				dsDTO.URL = url + "/db/" + ds.Database
			}

			if ds.Type == models.DS_INFLUXDB {
				password, err := hs.DataSourcesService.DecryptedPassword(c.Req.Context(), ds)
				if err != nil {
					return nil, err
				}

				dsDTO.Username = ds.User
				dsDTO.Password = hs.DataSourcesService.DecryptedPassword(ds)
				dsDTO.Password = password
				dsDTO.URL = url
			}
		}
@@ -39,6 +39,7 @@ import (
	"github.com/grafana/grafana/pkg/services/datasources"
	"github.com/grafana/grafana/pkg/services/datasources/permissions"
	"github.com/grafana/grafana/pkg/services/encryption"
	"github.com/grafana/grafana/pkg/services/export"
	"github.com/grafana/grafana/pkg/services/featuremgmt"
	"github.com/grafana/grafana/pkg/services/hooks"
	"github.com/grafana/grafana/pkg/services/ldap"

@@ -112,6 +113,7 @@ type HTTPServer struct {
	Live            *live.GrafanaLive
	LivePushGateway *pushhttp.Gateway
	ThumbService    thumbs.Service
	ExportService   export.ExportService
	StorageService  store.HTTPStorageService
	ContextHandler  *contexthandler.ContextHandler
	SQLStore        sqlstore.Store

@@ -171,7 +173,7 @@ func ProvideHTTPServer(opts ServerOptions, cfg *setting.Cfg, routeRegister routi
	contextHandler *contexthandler.ContextHandler, features *featuremgmt.FeatureManager,
	schemaService *schemaloader.SchemaLoaderService, alertNG *ngalert.AlertNG,
	libraryPanelService librarypanels.Service, libraryElementService libraryelements.Service,
	quotaService *quota.QuotaService, socialService social.Service, tracer tracing.Tracer,
	quotaService *quota.QuotaService, socialService social.Service, tracer tracing.Tracer, exportService export.ExportService,
	encryptionService encryption.Internal, grafanaUpdateChecker *updatechecker.GrafanaService,
	pluginsUpdateChecker *updatechecker.PluginsService, searchUsersService searchusers.Service,
	dataSourcesService datasources.DataSourceService, secretsService secrets.Service, queryDataService *query.Service,

@@ -218,6 +220,7 @@ func ProvideHTTPServer(opts ServerOptions, cfg *setting.Cfg, routeRegister routi
		AccessControl:         accessControl,
		DataProxy:             dataSourceProxy,
		SearchService:         searchService,
		ExportService:         exportService,
		Live:                  live,
		LivePushGateway:       livePushGateway,
		PluginContextProvider: plugCtxProvider,
@@ -9,6 +9,7 @@ import (
	"strings"
	"testing"

	acmock "github.com/grafana/grafana/pkg/services/accesscontrol/mock"
	"github.com/grafana/grafana/pkg/services/featuremgmt"
	"github.com/grafana/grafana/pkg/services/sqlstore/mockstore"

@@ -18,8 +19,11 @@ import (
	"github.com/grafana/grafana/pkg/components/simplejson"
	"github.com/grafana/grafana/pkg/models"
	"github.com/grafana/grafana/pkg/plugins"
	datasources "github.com/grafana/grafana/pkg/services/datasources/service"
	"github.com/grafana/grafana/pkg/services/query"
	"github.com/grafana/grafana/pkg/services/secrets/fakes"
	"github.com/grafana/grafana/pkg/services/secrets/kvstore"
	secretsManager "github.com/grafana/grafana/pkg/services/secrets/manager"
	"github.com/stretchr/testify/assert"
)

@@ -193,13 +197,17 @@ type dashboardFakePluginClient struct {
func TestAPIEndpoint_Metrics_QueryMetricsFromDashboard(t *testing.T) {
	sc := setupHTTPServerWithMockDb(t, false, false)

	secretsStore := kvstore.SetupTestService(t)
	secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
	ds := datasources.ProvideService(nil, secretsService, secretsStore, nil, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

	setInitCtxSignedInViewer(sc.initCtx)
	sc.hs.queryDataService = query.ProvideService(
		nil,
		nil,
		nil,
		&fakePluginRequestValidator{},
		fakes.NewFakeSecretsService(),
		ds,
		&dashboardFakePluginClient{},
		&fakeOAuthTokenService{},
	)
@@ -10,7 +10,6 @@ import (
"github.com/grafana/grafana/pkg/api/response"
"github.com/grafana/grafana/pkg/infra/metrics"
"github.com/grafana/grafana/pkg/models"
"github.com/grafana/grafana/pkg/services/sqlstore"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/util"
"github.com/grafana/grafana/pkg/web"
@@ -95,7 +94,7 @@ func (hs *HTTPServer) CreateOrg(c *models.ReqContext) response.Response {
}

cmd.UserId = c.UserId
if err := sqlstore.CreateOrg(c.Req.Context(), &cmd); err != nil {
if err := hs.SQLStore.CreateOrg(c.Req.Context(), &cmd); err != nil {
if errors.Is(err, models.ErrOrgNameTaken) {
return response.Error(409, "Organization name taken", err)
}

@@ -19,7 +19,6 @@ import (
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/services/datasources"
"github.com/grafana/grafana/pkg/services/oauthtoken"
"github.com/grafana/grafana/pkg/services/secrets"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/util"
"github.com/grafana/grafana/pkg/util/proxyutil"
@@ -43,7 +42,6 @@ type DataSourceProxy struct {
oAuthTokenService oauthtoken.OAuthTokenService
dataSourcesService datasources.DataSourceService
tracer tracing.Tracer
secretsService secrets.Service
}

type httpClient interface {
@@ -54,7 +52,7 @@ type httpClient interface {
func NewDataSourceProxy(ds *models.DataSource, pluginRoutes []*plugins.Route, ctx *models.ReqContext,
proxyPath string, cfg *setting.Cfg, clientProvider httpclient.Provider,
oAuthTokenService oauthtoken.OAuthTokenService, dsService datasources.DataSourceService,
tracer tracing.Tracer, secretsService secrets.Service) (*DataSourceProxy, error) {
tracer tracing.Tracer) (*DataSourceProxy, error) {
targetURL, err := datasource.ValidateURL(ds.Type, ds.Url)
if err != nil {
return nil, err
@@ -71,7 +69,6 @@ func NewDataSourceProxy(ds *models.DataSource, pluginRoutes []*plugins.Route, ct
oAuthTokenService: oAuthTokenService,
dataSourcesService: dsService,
tracer: tracer,
secretsService: secretsService,
}, nil
}

@@ -97,7 +94,7 @@ func (proxy *DataSourceProxy) HandleRequest() {
"referer", proxy.ctx.Req.Referer(),
)

transport, err := proxy.dataSourcesService.GetHTTPTransport(proxy.ds, proxy.clientProvider)
transport, err := proxy.dataSourcesService.GetHTTPTransport(proxy.ctx.Req.Context(), proxy.ds, proxy.clientProvider)
if err != nil {
proxy.ctx.JsonApiErr(400, "Unable to load TLS certificate", err)
return
@@ -169,17 +166,28 @@ func (proxy *DataSourceProxy) director(req *http.Request) {

switch proxy.ds.Type {
case models.DS_INFLUXDB_08:
password, err := proxy.dataSourcesService.DecryptedPassword(req.Context(), proxy.ds)
if err != nil {
logger.Error("Error interpolating proxy url", "error", err)
return
}

req.URL.RawPath = util.JoinURLFragments(proxy.targetUrl.Path, "db/"+proxy.ds.Database+"/"+proxy.proxyPath)
reqQueryVals.Add("u", proxy.ds.User)
reqQueryVals.Add("p", proxy.dataSourcesService.DecryptedPassword(proxy.ds))
reqQueryVals.Add("p", password)
req.URL.RawQuery = reqQueryVals.Encode()
case models.DS_INFLUXDB:
password, err := proxy.dataSourcesService.DecryptedPassword(req.Context(), proxy.ds)
if err != nil {
logger.Error("Error interpolating proxy url", "error", err)
return
}
req.URL.RawPath = util.JoinURLFragments(proxy.targetUrl.Path, proxy.proxyPath)
req.URL.RawQuery = reqQueryVals.Encode()
if !proxy.ds.BasicAuth {
req.Header.Set(
"Authorization",
util.GetBasicAuthHeader(proxy.ds.User, proxy.dataSourcesService.DecryptedPassword(proxy.ds)),
util.GetBasicAuthHeader(proxy.ds.User, password),
)
}
default:
@@ -195,8 +203,13 @@ func (proxy *DataSourceProxy) director(req *http.Request) {
req.URL.Path = unescapedPath

if proxy.ds.BasicAuth {
password, err := proxy.dataSourcesService.DecryptedBasicAuthPassword(req.Context(), proxy.ds)
if err != nil {
logger.Error("Error interpolating proxy url", "error", err)
return
}
req.Header.Set("Authorization", util.GetBasicAuthHeader(proxy.ds.BasicAuthUser,
proxy.dataSourcesService.DecryptedBasicAuthPassword(proxy.ds)))
password))
}

dsAuth := req.Header.Get("X-DS-Authorization")
@@ -226,23 +239,23 @@ func (proxy *DataSourceProxy) director(req *http.Request) {
}
}

secureJsonData, err := proxy.secretsService.DecryptJsonData(req.Context(), proxy.ds.SecureJsonData)
if err != nil {
logger.Error("Error interpolating proxy url", "error", err)
return
}

if proxy.matchedRoute != nil {
ApplyRoute(proxy.ctx.Req.Context(), req, proxy.proxyPath, proxy.matchedRoute, DSInfo{
decryptedValues, err := proxy.dataSourcesService.DecryptedValues(req.Context(), proxy.ds)
if err != nil {
logger.Error("Error interpolating proxy url", "error", err)
return
}

ApplyRoute(req.Context(), req, proxy.proxyPath, proxy.matchedRoute, DSInfo{
ID: proxy.ds.Id,
Updated: proxy.ds.Updated,
JSONData: jsonData,
DecryptedSecureJSONData: secureJsonData,
DecryptedSecureJSONData: decryptedValues,
}, proxy.cfg)
}

if proxy.oAuthTokenService.IsOAuthPassThruEnabled(proxy.ds) {
if token := proxy.oAuthTokenService.GetCurrentOAuthToken(proxy.ctx.Req.Context(), proxy.ctx.SignedInUser); token != nil {
if token := proxy.oAuthTokenService.GetCurrentOAuthToken(req.Context(), proxy.ctx.SignedInUser); token != nil {
req.Header.Set("Authorization", fmt.Sprintf("%s %s", token.Type(), token.AccessToken))

idToken, ok := token.Extra("id_token").(string)

@@ -3,6 +3,7 @@ package pluginproxy
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
@@ -24,6 +25,7 @@ import (
"github.com/grafana/grafana/pkg/services/oauthtoken"
"github.com/grafana/grafana/pkg/services/secrets"
"github.com/grafana/grafana/pkg/services/secrets/fakes"
"github.com/grafana/grafana/pkg/services/secrets/kvstore"
secretsManager "github.com/grafana/grafana/pkg/services/secrets/manager"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/web"
@@ -90,6 +92,7 @@ func TestDataSourceProxy_routeRule(t *testing.T) {
})
setting.SecretKey = "password" //nolint:goconst

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
key, err := secretsService.Encrypt(context.Background(), []byte("123"), secrets.WithoutScope())
require.NoError(t, err)
@@ -128,9 +131,9 @@ func TestDataSourceProxy_routeRule(t *testing.T) {

t.Run("When matching route path", func(t *testing.T) {
ctx, req := setUp()
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "api/v4/some/method", cfg, httpClientProvider,
&oauthtoken.Service{}, dsService, tracer, secretsService)
&oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
proxy.matchedRoute = routes[0]
ApplyRoute(proxy.ctx.Req.Context(), req, proxy.proxyPath, proxy.matchedRoute, dsInfo, cfg)
@@ -141,8 +144,8 @@ func TestDataSourceProxy_routeRule(t *testing.T) {

t.Run("When matching route path and has dynamic url", func(t *testing.T) {
ctx, req := setUp()
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "api/common/some/method", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "api/common/some/method", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
proxy.matchedRoute = routes[3]
ApplyRoute(proxy.ctx.Req.Context(), req, proxy.proxyPath, proxy.matchedRoute, dsInfo, cfg)
@@ -153,8 +156,8 @@ func TestDataSourceProxy_routeRule(t *testing.T) {

t.Run("When matching route path with no url", func(t *testing.T) {
ctx, req := setUp()
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
proxy.matchedRoute = routes[4]
ApplyRoute(proxy.ctx.Req.Context(), req, proxy.proxyPath, proxy.matchedRoute, dsInfo, cfg)
@@ -164,8 +167,8 @@ func TestDataSourceProxy_routeRule(t *testing.T) {

t.Run("When matching route path and has dynamic body", func(t *testing.T) {
ctx, req := setUp()
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "api/body", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "api/body", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
proxy.matchedRoute = routes[5]
ApplyRoute(proxy.ctx.Req.Context(), req, proxy.proxyPath, proxy.matchedRoute, dsInfo, cfg)
@@ -178,8 +181,8 @@ func TestDataSourceProxy_routeRule(t *testing.T) {
t.Run("Validating request", func(t *testing.T) {
t.Run("plugin route with valid role", func(t *testing.T) {
ctx, _ := setUp()
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "api/v4/some/method", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "api/v4/some/method", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
err = proxy.validateRequest()
require.NoError(t, err)
@@ -187,8 +190,8 @@ func TestDataSourceProxy_routeRule(t *testing.T) {

t.Run("plugin route with admin role and user is editor", func(t *testing.T) {
ctx, _ := setUp()
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "api/admin", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "api/admin", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
err = proxy.validateRequest()
require.Error(t, err)
@@ -197,8 +200,8 @@ func TestDataSourceProxy_routeRule(t *testing.T) {
t.Run("plugin route with admin role and user is admin", func(t *testing.T) {
ctx, _ := setUp()
ctx.SignedInUser.OrgRole = models.ROLE_ADMIN
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "api/admin", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "api/admin", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
err = proxy.validateRequest()
require.NoError(t, err)
@@ -242,6 +245,7 @@ func TestDataSourceProxy_routeRule(t *testing.T) {
})
setting.SecretKey = "password"

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
key, err := secretsService.Encrypt(context.Background(), []byte("123"), secrets.WithoutScope())
require.NoError(t, err)
@@ -286,8 +290,8 @@ func TestDataSourceProxy_routeRule(t *testing.T) {
},
}

dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "pathwithtoken1", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "pathwithtoken1", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
ApplyRoute(proxy.ctx.Req.Context(), req, proxy.proxyPath, routes[0], dsInfo, cfg)

@@ -302,8 +306,8 @@ func TestDataSourceProxy_routeRule(t *testing.T) {
req, err := http.NewRequest("GET", "http://localhost/asd", nil)
require.NoError(t, err)
client = newFakeHTTPClient(t, json2)
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "pathwithtoken2", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "pathwithtoken2", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
ApplyRoute(proxy.ctx.Req.Context(), req, proxy.proxyPath, routes[1], dsInfo, cfg)

@@ -319,8 +323,8 @@ func TestDataSourceProxy_routeRule(t *testing.T) {
require.NoError(t, err)

client = newFakeHTTPClient(t, []byte{})
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "pathwithtoken1", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "pathwithtoken1", cfg, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
ApplyRoute(proxy.ctx.Req.Context(), req, proxy.proxyPath, routes[0], dsInfo, cfg)

@@ -340,9 +344,10 @@ func TestDataSourceProxy_routeRule(t *testing.T) {
ds := &models.DataSource{Url: "htttp://graphite:8080", Type: models.DS_GRAPHITE}
ctx := &models.ReqContext{}

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/render", &setting.Cfg{BuildVersion: "5.3.0"}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/render", &setting.Cfg{BuildVersion: "5.3.0"}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
req, err := http.NewRequest(http.MethodGet, "http://grafana.com/sub", nil)
require.NoError(t, err)
@@ -366,9 +371,10 @@ func TestDataSourceProxy_routeRule(t *testing.T) {

ctx := &models.ReqContext{}
var routes []*plugins.Route
secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)

req, err := http.NewRequest(http.MethodGet, "http://grafana.com/sub", nil)
@@ -390,9 +396,10 @@ func TestDataSourceProxy_routeRule(t *testing.T) {

ctx := &models.ReqContext{}
var routes []*plugins.Route
secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)

requestURL, err := url.Parse("http://grafana.com/sub")
@@ -418,9 +425,10 @@ func TestDataSourceProxy_routeRule(t *testing.T) {

ctx := &models.ReqContext{}
var pluginRoutes []*plugins.Route
secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, pluginRoutes, ctx, "", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, pluginRoutes, ctx, "", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)

requestURL, err := url.Parse("http://grafana.com/sub")
@@ -441,9 +449,10 @@ func TestDataSourceProxy_routeRule(t *testing.T) {
}
ctx := &models.ReqContext{}
var routes []*plugins.Route
secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/path/to/folder/", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/path/to/folder/", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
require.NoError(t, err)
req, err := http.NewRequest(http.MethodGet, "http://grafana.com/sub", nil)
req.Header.Set("Origin", "grafana.com")
@@ -490,9 +499,10 @@ func TestDataSourceProxy_routeRule(t *testing.T) {
}

var routes []*plugins.Route
secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/path/to/folder/", &setting.Cfg{}, httpClientProvider, &mockAuthToken, dsService, tracer, secretsService)
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/path/to/folder/", &setting.Cfg{}, httpClientProvider, &mockAuthToken, dsService, tracer)
require.NoError(t, err)
req, err = http.NewRequest(http.MethodGet, "http://grafana.com/sub", nil)
require.NoError(t, err)
@@ -543,24 +553,25 @@ func TestDataSourceProxy_routeRule(t *testing.T) {
})

t.Run("When proxying data source proxy should handle authentication", func(t *testing.T) {
secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())

tests := []*testCase{
createAuthTest(t, secretsService, models.DS_INFLUXDB_08, "http://localhost:9090", authTypePassword, authCheckQuery, false),
createAuthTest(t, secretsService, models.DS_INFLUXDB_08, "http://localhost:9090", authTypePassword, authCheckQuery, true),
createAuthTest(t, secretsService, models.DS_INFLUXDB, "http://localhost:9090", authTypePassword, authCheckHeader, true),
createAuthTest(t, secretsService, models.DS_INFLUXDB, "http://localhost:9090", authTypePassword, authCheckHeader, false),
createAuthTest(t, secretsService, models.DS_INFLUXDB, "http://localhost:9090", authTypeBasic, authCheckHeader, true),
createAuthTest(t, secretsService, models.DS_INFLUXDB, "http://localhost:9090", authTypeBasic, authCheckHeader, false),
createAuthTest(t, secretsStore, models.DS_INFLUXDB_08, "http://localhost:9090", authTypePassword, authCheckQuery, false),
createAuthTest(t, secretsStore, models.DS_INFLUXDB_08, "http://localhost:9090", authTypePassword, authCheckQuery, true),
createAuthTest(t, secretsStore, models.DS_INFLUXDB, "http://localhost:9090", authTypePassword, authCheckHeader, true),
createAuthTest(t, secretsStore, models.DS_INFLUXDB, "http://localhost:9090", authTypePassword, authCheckHeader, false),
createAuthTest(t, secretsStore, models.DS_INFLUXDB, "http://localhost:9090", authTypeBasic, authCheckHeader, true),
createAuthTest(t, secretsStore, models.DS_INFLUXDB, "http://localhost:9090", authTypeBasic, authCheckHeader, false),

// These two should be enough for any other datasource at the moment. Proxy has special handling
// only for Influx, others have the same path and only BasicAuth. Non BasicAuth datasources
// do not go through proxy but through TSDB API which is not tested here.
createAuthTest(t, secretsService, models.DS_ES, "http://localhost:9200", authTypeBasic, authCheckHeader, false),
createAuthTest(t, secretsService, models.DS_ES, "http://localhost:9200", authTypeBasic, authCheckHeader, true),
createAuthTest(t, secretsStore, models.DS_ES, "http://localhost:9200", authTypeBasic, authCheckHeader, false),
createAuthTest(t, secretsStore, models.DS_ES, "http://localhost:9200", authTypeBasic, authCheckHeader, true),
}
for _, test := range tests {
runDatasourceAuthTest(t, secretsService, cfg, test)
runDatasourceAuthTest(t, secretsService, secretsStore, cfg, test)
}
})
}
@ -624,9 +635,10 @@ func TestDataSourceProxy_requestHandling(t *testing.T) {
|
||||
t.Run("When response header Set-Cookie is not set should remove proxied Set-Cookie header", func(t *testing.T) {
|
||||
ctx, ds := setUp(t)
|
||||
var routes []*plugins.Route
|
||||
secretsStore := kvstore.SetupTestService(t)
|
||||
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
|
||||
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/render", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
|
||||
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/render", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
|
||||
require.NoError(t, err)
|
||||
|
||||
proxy.HandleRequest()
|
||||
@ -642,9 +654,10 @@ func TestDataSourceProxy_requestHandling(t *testing.T) {
|
||||
},
|
||||
})
|
||||
var routes []*plugins.Route
|
||||
secretsStore := kvstore.SetupTestService(t)
|
||||
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
|
||||
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/render", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
|
||||
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/render", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
|
||||
require.NoError(t, err)
|
||||
|
||||
proxy.HandleRequest()
|
||||
@ -656,9 +669,10 @@ func TestDataSourceProxy_requestHandling(t *testing.T) {
|
||||
t.Run("When response should set Content-Security-Policy header", func(t *testing.T) {
|
||||
ctx, ds := setUp(t)
|
||||
var routes []*plugins.Route
|
||||
secretsStore := kvstore.SetupTestService(t)
|
||||
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
|
||||
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/render", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
|
||||
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/render", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
|
||||
require.NoError(t, err)
|
||||
|
||||
proxy.HandleRequest()
|
||||
@ -678,9 +692,10 @@ func TestDataSourceProxy_requestHandling(t *testing.T) {
|
||||
},
|
||||
})
|
||||
var routes []*plugins.Route
|
||||
secretsStore := kvstore.SetupTestService(t)
|
||||
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
|
||||
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/render", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
|
||||
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/render", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
|
||||
require.NoError(t, err)
|
||||
|
||||
proxy.HandleRequest()
|
||||
@ -703,9 +718,10 @@ func TestDataSourceProxy_requestHandling(t *testing.T) {
|
||||
|
||||
ctx.Req = httptest.NewRequest("GET", "/api/datasources/proxy/1/path/%2Ftest%2Ftest%2F?query=%2Ftest%2Ftest%2F", nil)
|
||||
var routes []*plugins.Route
|
||||
secretsStore := kvstore.SetupTestService(t)
|
||||
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
|
||||
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/path/%2Ftest%2Ftest%2F", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
|
||||
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
proxy, err := NewDataSourceProxy(ds, routes, ctx, "/path/%2Ftest%2Ftest%2F", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
|
||||
require.NoError(t, err)
|
||||
|
||||
proxy.HandleRequest()
|
||||
@@ -727,9 +743,10 @@ func TestDataSourceProxy_requestHandling(t *testing.T) {

 		ctx.Req = httptest.NewRequest("GET", "/api/datasources/proxy/1/path/%2Ftest%2Ftest%2F?query=%2Ftest%2Ftest%2F", nil)
 		var routes []*plugins.Route
+		secretsStore := kvstore.SetupTestService(t)
 		secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
-		dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
-		proxy, err := NewDataSourceProxy(ds, routes, ctx, "/path/%2Ftest%2Ftest%2F", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer, secretsService)
+		dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
+		proxy, err := NewDataSourceProxy(ds, routes, ctx, "/path/%2Ftest%2Ftest%2F", &setting.Cfg{}, httpClientProvider, &oauthtoken.Service{}, dsService, tracer)
 		require.NoError(t, err)

 		proxy.HandleRequest()
@@ -752,9 +769,10 @@ func TestNewDataSourceProxy_InvalidURL(t *testing.T) {
 	tracer, err := tracing.InitializeTracerForTest()
 	require.NoError(t, err)
 	var routes []*plugins.Route
+	secretsStore := kvstore.SetupTestService(t)
 	secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
-	dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
-	_, err = NewDataSourceProxy(&ds, routes, &ctx, "api/method", cfg, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer, secretsService)
+	dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
+	_, err = NewDataSourceProxy(&ds, routes, &ctx, "api/method", cfg, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer)
 	require.Error(t, err)
 	assert.True(t, strings.HasPrefix(err.Error(), `validation of data source URL "://host/root" failed`))
 }
@@ -773,9 +791,10 @@ func TestNewDataSourceProxy_ProtocolLessURL(t *testing.T) {
 	require.NoError(t, err)

 	var routes []*plugins.Route
+	secretsStore := kvstore.SetupTestService(t)
 	secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
-	dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
-	_, err = NewDataSourceProxy(&ds, routes, &ctx, "api/method", cfg, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer, secretsService)
+	dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
+	_, err = NewDataSourceProxy(&ds, routes, &ctx, "api/method", cfg, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer)

 	require.NoError(t, err)
 }
@@ -816,9 +835,10 @@ func TestNewDataSourceProxy_MSSQL(t *testing.T) {
 			}

 			var routes []*plugins.Route
+			secretsStore := kvstore.SetupTestService(t)
 			secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
-			dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
-			p, err := NewDataSourceProxy(&ds, routes, &ctx, "api/method", cfg, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer, secretsService)
+			dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
+			p, err := NewDataSourceProxy(&ds, routes, &ctx, "api/method", cfg, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer)
 			if tc.err == nil {
 				require.NoError(t, err)
 				assert.Equal(t, &url.URL{
@@ -843,9 +863,10 @@ func getDatasourceProxiedRequest(t *testing.T, ctx *models.ReqContext, cfg *sett
 	require.NoError(t, err)

 	var routes []*plugins.Route
+	secretsStore := kvstore.SetupTestService(t)
 	secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
-	dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
-	proxy, err := NewDataSourceProxy(ds, routes, ctx, "", cfg, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer, secretsService)
+	dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
+	proxy, err := NewDataSourceProxy(ds, routes, ctx, "", cfg, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer)
 	require.NoError(t, err)
 	req, err := http.NewRequest(http.MethodGet, "http://grafana.com/sub", nil)
 	require.NoError(t, err)
@@ -897,15 +918,15 @@ const (
 	authCheckHeader = "header"
 )

-func createAuthTest(t *testing.T, secretsService secrets.Service, dsType string, url string, authType string, authCheck string, useSecureJsonData bool) *testCase {
-	ctx := context.Background()
+func createAuthTest(t *testing.T, secretsStore kvstore.SecretsKVStore, dsType string, url string, authType string, authCheck string, useSecureJsonData bool) *testCase {
 	// Basic user:password
 	base64AuthHeader := "Basic dXNlcjpwYXNzd29yZA=="

 	test := &testCase{
 		datasource: &models.DataSource{
 			Id:       1,
 			OrgId:    1,
 			Name:     fmt.Sprintf("%s,%s,%s,%s,%t", dsType, url, authType, authCheck, useSecureJsonData),
 			Type:     dsType,
 			JsonData: simplejson.New(),
 			Url:      url,
@@ -917,11 +938,13 @@ func createAuthTest(t *testing.T, secretsService secrets.Service, dsType string,
 		message = fmt.Sprintf("%v should add username and password", dsType)
 		test.datasource.User = "user"
 		if useSecureJsonData {
-			test.datasource.SecureJsonData, err = secretsService.EncryptJsonData(
-				ctx,
-				map[string]string{
-					"password": "password",
-				}, secrets.WithoutScope())
+			secureJsonData, err := json.Marshal(map[string]string{
+				"password": "password",
+			})
 			require.NoError(t, err)
+
+			err = secretsStore.Set(context.Background(), test.datasource.OrgId, test.datasource.Name, "datasource", string(secureJsonData))
+			require.NoError(t, err)
 		} else {
 			test.datasource.Password = "password"
 		}
@@ -930,11 +953,13 @@ func createAuthTest(t *testing.T, secretsService secrets.Service, dsType string,
 		test.datasource.BasicAuth = true
 		test.datasource.BasicAuthUser = "user"
 		if useSecureJsonData {
-			test.datasource.SecureJsonData, err = secretsService.EncryptJsonData(
-				ctx,
-				map[string]string{
-					"basicAuthPassword": "password",
-				}, secrets.WithoutScope())
+			secureJsonData, err := json.Marshal(map[string]string{
+				"basicAuthPassword": "password",
+			})
 			require.NoError(t, err)
+
+			err = secretsStore.Set(context.Background(), test.datasource.OrgId, test.datasource.Name, "datasource", string(secureJsonData))
+			require.NoError(t, err)
 		} else {
 			test.datasource.BasicAuthPassword = "password"
 		}
@@ -962,14 +987,14 @@ func createAuthTest(t *testing.T, secretsService secrets.Service, dsType string,
 	return test
 }

-func runDatasourceAuthTest(t *testing.T, secretsService secrets.Service, cfg *setting.Cfg, test *testCase) {
+func runDatasourceAuthTest(t *testing.T, secretsService secrets.Service, secretsStore kvstore.SecretsKVStore, cfg *setting.Cfg, test *testCase) {
 	ctx := &models.ReqContext{}
 	tracer, err := tracing.InitializeTracerForTest()
 	require.NoError(t, err)

 	var routes []*plugins.Route
-	dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
-	proxy, err := NewDataSourceProxy(test.datasource, routes, ctx, "", &setting.Cfg{}, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer, secretsService)
+	dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
+	proxy, err := NewDataSourceProxy(test.datasource, routes, ctx, "", &setting.Cfg{}, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer)
 	require.NoError(t, err)

 	req, err := http.NewRequest(http.MethodGet, "http://grafana.com/sub", nil)
@@ -1010,9 +1035,10 @@ func Test_PathCheck(t *testing.T) {
 		return ctx, req
 	}
 	ctx, _ := setUp()
+	secretsStore := kvstore.SetupTestService(t)
 	secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
-	dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
-	proxy, err := NewDataSourceProxy(&models.DataSource{}, routes, ctx, "b", &setting.Cfg{}, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer, secretsService)
+	dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
+	proxy, err := NewDataSourceProxy(&models.DataSource{}, routes, ctx, "b", &setting.Cfg{}, httpclient.NewProvider(), &oauthtoken.Service{}, dsService, tracer)
 	require.NoError(t, err)

 	require.Nil(t, proxy.validateRequest())
@@ -7,7 +7,7 @@ import (
 	"github.com/grafana/grafana/pkg/components/simplejson"
 	"github.com/grafana/grafana/pkg/models"
 	"github.com/grafana/grafana/pkg/plugins"
-	"github.com/grafana/grafana/pkg/services/secrets"
+	"github.com/grafana/grafana/pkg/services/datasources"
 	"github.com/grafana/grafana/pkg/setting"
 )
@@ -36,16 +36,16 @@ func IsDataSource(uid string) bool {

 // Service is service representation for expression handling.
 type Service struct {
-	cfg            *setting.Cfg
-	dataService    backend.QueryDataHandler
-	secretsService secrets.Service
+	cfg               *setting.Cfg
+	dataService       backend.QueryDataHandler
+	dataSourceService datasources.DataSourceService
 }

-func ProvideService(cfg *setting.Cfg, pluginClient plugins.Client, secretsService secrets.Service) *Service {
+func ProvideService(cfg *setting.Cfg, pluginClient plugins.Client, dataSourceService datasources.DataSourceService) *Service {
 	return &Service{
-		cfg:            cfg,
-		dataService:    pluginClient,
-		secretsService: secretsService,
+		cfg:               cfg,
+		dataService:       pluginClient,
+		dataSourceService: dataSourceService,
 	}
 }
@@ -11,8 +11,7 @@ import (
 	"github.com/grafana/grafana-plugin-sdk-go/backend"
 	"github.com/grafana/grafana-plugin-sdk-go/data"
 	"github.com/grafana/grafana/pkg/models"
-	"github.com/grafana/grafana/pkg/services/secrets/fakes"
-	secretsManager "github.com/grafana/grafana/pkg/services/secrets/manager"
+	datasources "github.com/grafana/grafana/pkg/services/datasources/fakes"
 	"github.com/grafana/grafana/pkg/setting"
 	"github.com/stretchr/testify/require"
 )
@@ -28,12 +27,10 @@ func TestService(t *testing.T) {

 	cfg := setting.NewCfg()

-	secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
-
 	s := Service{
-		cfg:            cfg,
-		dataService:    me,
-		secretsService: secretsService,
+		cfg:               cfg,
+		dataService:       me,
+		dataSourceService: &datasources.FakeDataSourceService{},
 	}

 	queries := []Query{
@@ -126,9 +126,9 @@ func hiddenRefIDs(queries []Query) (map[string]struct{}, error) {
 	return hidden, nil
 }

-func (s *Service) decryptSecureJsonDataFn(ctx context.Context) func(map[string][]byte) map[string]string {
-	return func(m map[string][]byte) map[string]string {
-		decryptedJsonData, err := s.secretsService.DecryptJsonData(ctx, m)
+func (s *Service) decryptSecureJsonDataFn(ctx context.Context) func(ds *models.DataSource) map[string]string {
+	return func(ds *models.DataSource) map[string]string {
+		decryptedJsonData, err := s.dataSourceService.DecryptedValues(ctx, ds)
 		if err != nil {
 			logger.Error("Failed to decrypt secure json data", "error", err)
 		}
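The hunk above narrows what the expression service needs: instead of handing the callback raw encrypted bytes, it returns a closure over the whole datasource and delegates decryption to the datasource service. A minimal, self-contained sketch of that closure shape follows; `DataSource` and `secretsKV` here are illustrative stand-ins, not Grafana's `models.DataSource` or the real service:

```go
package main

import "fmt"

// DataSource and secretsKV are hypothetical stand-ins for the real types.
type DataSource struct {
	OrgID int64
	Name  string
}

type secretsKV map[string]string

// decryptSecureJSONDataFn mirrors the new closure shape: it captures the
// secret store and takes the whole *DataSource, so callers never see raw
// encrypted bytes.
func decryptSecureJSONDataFn(store secretsKV) func(ds *DataSource) map[string]string {
	return func(ds *DataSource) map[string]string {
		key := fmt.Sprintf("%d/%s", ds.OrgID, ds.Name)
		if v, ok := store[key]; ok {
			return map[string]string{"password": v}
		}
		// On a miss, return an empty map rather than nil, matching the
		// "log and keep going" behavior of the surrounding code.
		return map[string]string{}
	}
}

func main() {
	decrypt := decryptSecureJSONDataFn(secretsKV{"1/prom": "s3cret"})
	fmt.Println(decrypt(&DataSource{OrgID: 1, Name: "prom"})["password"])
}
```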
@@ -60,7 +60,7 @@ func (s *Service) detectPrometheusVariant(ctx context.Context, ds *models.DataSo
 		} `json:"data"`
 	}

-	c, err := s.datasources.GetHTTPTransport(ds, s.httpClientProvider)
+	c, err := s.datasources.GetHTTPTransport(ctx, ds, s.httpClientProvider)
 	if err != nil {
 		s.log.Error("Failed to get HTTP client for Prometheus data source", "error", err)
 		return "", err
@@ -395,6 +395,6 @@ func (s mockDatasourceService) GetDataSourcesByType(ctx context.Context, query *
 	return nil
 }

-func (s mockDatasourceService) GetHTTPTransport(ds *models.DataSource, provider httpclient.Provider, customMiddlewares ...sdkhttpclient.Middleware) (http.RoundTripper, error) {
+func (s mockDatasourceService) GetHTTPTransport(ctx context.Context, ds *models.DataSource, provider httpclient.Provider, customMiddlewares ...sdkhttpclient.Middleware) (http.RoundTripper, error) {
 	return provider.GetTransport()
 }
@@ -9,7 +9,7 @@ import (
 )

 // ModelToInstanceSettings converts a models.DataSource to a backend.DataSourceInstanceSettings.
-func ModelToInstanceSettings(ds *models.DataSource, decryptFn func(map[string][]byte) map[string]string,
+func ModelToInstanceSettings(ds *models.DataSource, decryptFn func(ds *models.DataSource) map[string]string,
 ) (*backend.DataSourceInstanceSettings, error) {
 	var jsonDataBytes json.RawMessage
 	if ds.JsonData != nil {
@@ -30,7 +30,7 @@ func ModelToInstanceSettings(ds *models.DataSource, decryptFn func(map[string][]
 		BasicAuthEnabled: ds.BasicAuth,
 		BasicAuthUser:    ds.BasicAuthUser,
 		JSONData:         jsonDataBytes,
-		DecryptedSecureJSONData: decryptFn(ds.SecureJsonData),
+		DecryptedSecureJSONData: decryptFn(ds),
 		Updated: ds.Updated,
 	}, nil
 }
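The adapter change above only swaps the decrypt callback's parameter from the encrypted map to the datasource itself. A hedged sketch of the same conversion with simplified stand-in types (these are not the real `models.DataSource` or `backend.DataSourceInstanceSettings`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the model and the instance-settings struct.
type DataSource struct {
	Name     string
	JSONData map[string]interface{}
}

type InstanceSettings struct {
	Name                    string
	JSONData                json.RawMessage
	DecryptedSecureJSONData map[string]string
}

// modelToInstanceSettings mirrors the new adapter signature: the decrypt
// callback receives the whole datasource rather than its encrypted bytes,
// so the adapter no longer cares where secrets are stored.
func modelToInstanceSettings(ds *DataSource, decryptFn func(ds *DataSource) map[string]string) (*InstanceSettings, error) {
	raw, err := json.Marshal(ds.JSONData)
	if err != nil {
		return nil, err
	}
	return &InstanceSettings{
		Name:                    ds.Name,
		JSONData:                raw,
		DecryptedSecureJSONData: decryptFn(ds),
	}, nil
}

func main() {
	ds := &DataSource{Name: "prom", JSONData: map[string]interface{}{"timeout": 30}}
	settings, err := modelToInstanceSettings(ds, func(*DataSource) map[string]string {
		return map[string]string{"password": "s3cret"}
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(settings.DecryptedSecureJSONData["password"])
}
```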
@@ -15,18 +15,17 @@ import (
 	"github.com/grafana/grafana/pkg/plugins/adapters"
 	"github.com/grafana/grafana/pkg/services/datasources"
 	"github.com/grafana/grafana/pkg/services/pluginsettings"
-	"github.com/grafana/grafana/pkg/services/secrets"
 	"github.com/grafana/grafana/pkg/util/errutil"
 )

 func ProvideService(cacheService *localcache.CacheService, pluginStore plugins.Store,
-	dataSourceCache datasources.CacheService, secretsService secrets.Service,
+	dataSourceCache datasources.CacheService, dataSourceService datasources.DataSourceService,
 	pluginSettingsService pluginsettings.Service) *Provider {
 	return &Provider{
 		cacheService:          cacheService,
 		pluginStore:           pluginStore,
 		dataSourceCache:       dataSourceCache,
-		secretsService:        secretsService,
+		dataSourceService:     dataSourceService,
 		pluginSettingsService: pluginSettingsService,
 		logger:                log.New("plugincontext"),
 	}
@@ -36,7 +35,7 @@ type Provider struct {
 	cacheService          *localcache.CacheService
 	pluginStore           plugins.Store
 	dataSourceCache       datasources.CacheService
-	secretsService        secrets.Service
+	dataSourceService     datasources.DataSourceService
 	pluginSettingsService pluginsettings.Service
 	logger                log.Logger
 }
@@ -87,7 +86,7 @@ func (p *Provider) Get(ctx context.Context, pluginID string, datasourceUID strin
 	if err != nil {
 		return pc, false, errutil.Wrap("Failed to get datasource", err)
 	}
-	datasourceSettings, err := adapters.ModelToInstanceSettings(ds, p.decryptSecureJsonDataFn())
+	datasourceSettings, err := adapters.ModelToInstanceSettings(ds, p.decryptSecureJsonDataFn(ctx))
 	if err != nil {
 		return pc, false, errutil.Wrap("Failed to convert datasource", err)
 	}
@@ -122,9 +121,9 @@ func (p *Provider) getCachedPluginSettings(ctx context.Context, pluginID string,
 	return ps, nil
 }

-func (p *Provider) decryptSecureJsonDataFn() func(map[string][]byte) map[string]string {
-	return func(m map[string][]byte) map[string]string {
-		decryptedJsonData, err := p.secretsService.DecryptJsonData(context.Background(), m)
+func (p *Provider) decryptSecureJsonDataFn(ctx context.Context) func(ds *models.DataSource) map[string]string {
+	return func(ds *models.DataSource) map[string]string {
+		decryptedJsonData, err := p.dataSourceService.DecryptedValues(ctx, ds)
 		if err != nil {
 			p.logger.Error("Failed to decrypt secure json data", "error", err)
 		}
@@ -49,6 +49,7 @@ import (
 	"github.com/grafana/grafana/pkg/services/datasourceproxy"
 	"github.com/grafana/grafana/pkg/services/datasources"
 	datasourceservice "github.com/grafana/grafana/pkg/services/datasources/service"
+	"github.com/grafana/grafana/pkg/services/export"
 	"github.com/grafana/grafana/pkg/services/featuremgmt"
 	"github.com/grafana/grafana/pkg/services/guardian"
 	"github.com/grafana/grafana/pkg/services/hooks"
@@ -78,6 +79,7 @@ import (
 	"github.com/grafana/grafana/pkg/services/searchV2"
 	"github.com/grafana/grafana/pkg/services/secrets"
 	secretsDatabase "github.com/grafana/grafana/pkg/services/secrets/database"
+	secretsStore "github.com/grafana/grafana/pkg/services/secrets/kvstore"
 	secretsManager "github.com/grafana/grafana/pkg/services/secrets/manager"
 	"github.com/grafana/grafana/pkg/services/serviceaccounts"
 	serviceaccountsmanager "github.com/grafana/grafana/pkg/services/serviceaccounts/manager"
@@ -174,6 +176,7 @@ var wireBasicSet = wire.NewSet(
 	searchV2.ProvideService,
 	store.ProvideService,
 	store.ProvideHTTPService,
+	export.ProvideService,
 	live.ProvideService,
 	pushhttp.ProvideService,
 	plugincontext.ProvideService,
@@ -239,6 +242,7 @@ var wireBasicSet = wire.NewSet(
 	wire.Bind(new(alerting.DashAlertExtractor), new(*alerting.DashAlertExtractorService)),
 	comments.ProvideService,
 	guardian.ProvideService,
+	secretsStore.ProvideService,
 	avatar.ProvideAvatarCacheServer,
 	authproxy.ProvideAuthProxy,
 	statscollector.ProvideService,
@@ -115,7 +115,7 @@ func (p *DataSourceProxyService) proxyDatasourceRequest(c *models.ReqContext, ds

 	proxyPath := getProxyPath(c)
 	proxy, err := pluginproxy.NewDataSourceProxy(ds, plugin.Routes, c, proxyPath, p.Cfg, p.HTTPClientProvider,
-		p.OAuthTokenService, p.DataSourcesService, p.tracer, p.secretsService)
+		p.OAuthTokenService, p.DataSourcesService, p.tracer)
 	if err != nil {
 		if errors.Is(err, datasource.URLValidationError{}) {
 			c.JsonApiErr(http.StatusBadRequest, fmt.Sprintf("Invalid data source URL: %q", ds.Url), err)
@@ -33,23 +33,23 @@ type DataSourceService interface {
 	GetDefaultDataSource(ctx context.Context, query *models.GetDefaultDataSourceQuery) error

 	// GetHTTPTransport gets a datasource specific HTTP transport.
-	GetHTTPTransport(ds *models.DataSource, provider httpclient.Provider, customMiddlewares ...sdkhttpclient.Middleware) (http.RoundTripper, error)
+	GetHTTPTransport(ctx context.Context, ds *models.DataSource, provider httpclient.Provider, customMiddlewares ...sdkhttpclient.Middleware) (http.RoundTripper, error)

 	// DecryptedValues decrypts the encrypted secureJSONData of the provided datasource and
 	// returns the decrypted values.
-	DecryptedValues(ds *models.DataSource) map[string]string
+	DecryptedValues(ctx context.Context, ds *models.DataSource) (map[string]string, error)

 	// DecryptedValue decrypts the encrypted datasource secureJSONData identified by key
 	// and returns the decrypted value.
-	DecryptedValue(ds *models.DataSource, key string) (string, bool)
+	DecryptedValue(ctx context.Context, ds *models.DataSource, key string) (string, bool, error)

 	// DecryptedBasicAuthPassword decrypts the encrypted datasource basic authentication
 	// password and returns the decrypted value.
-	DecryptedBasicAuthPassword(ds *models.DataSource) string
+	DecryptedBasicAuthPassword(ctx context.Context, ds *models.DataSource) (string, error)

 	// DecryptedPassword decrypts the encrypted datasource password and returns the
 	// decrypted value.
-	DecryptedPassword(ds *models.DataSource) string
+	DecryptedPassword(ctx context.Context, ds *models.DataSource) (string, error)
 }

 // CacheService interface for retrieving a cached datasource.
@@ -4,13 +4,14 @@ import (
 	"context"

 	"github.com/grafana/grafana/pkg/models"
+	"github.com/grafana/grafana/pkg/services/datasources"
 )

 type FakeCacheService struct {
 	DataSources []*models.DataSource
 }

-var _ CacheService = &FakeCacheService{}
+var _ datasources.CacheService = &FakeCacheService{}

 func (c *FakeCacheService) GetDatasource(ctx context.Context, datasourceID int64, user *models.SignedInUser, skipCache bool) (*models.DataSource, error) {
 	for _, datasource := range c.DataSources {
pkg/services/datasources/fakes/fake_datasource_service.go (new file, 124 lines)
@@ -0,0 +1,124 @@
+package datasources
+
+import (
+	"context"
+	"net/http"
+
+	sdkhttpclient "github.com/grafana/grafana-plugin-sdk-go/backend/httpclient"
+	"github.com/grafana/grafana/pkg/infra/httpclient"
+	"github.com/grafana/grafana/pkg/models"
+	"github.com/grafana/grafana/pkg/services/datasources"
+)
+
+type FakeDataSourceService struct {
+	lastId      int64
+	DataSources []*models.DataSource
+}
+
+var _ datasources.DataSourceService = &FakeDataSourceService{}
+
+func (s *FakeDataSourceService) GetDataSource(ctx context.Context, query *models.GetDataSourceQuery) error {
+	for _, datasource := range s.DataSources {
+		idMatch := query.Id != 0 && query.Id == datasource.Id
+		uidMatch := query.Uid != "" && query.Uid == datasource.Uid
+		nameMatch := query.Name != "" && query.Name == datasource.Name
+		if idMatch || nameMatch || uidMatch {
+			query.Result = datasource
+
+			return nil
+		}
+	}
+	return models.ErrDataSourceNotFound
+}
+
+func (s *FakeDataSourceService) GetDataSources(ctx context.Context, query *models.GetDataSourcesQuery) error {
+	for _, datasource := range s.DataSources {
+		orgMatch := query.OrgId != 0 && query.OrgId == datasource.OrgId
+		if orgMatch {
+			query.Result = append(query.Result, datasource)
+		}
+	}
+	return nil
+}
+
+func (s *FakeDataSourceService) GetDataSourcesByType(ctx context.Context, query *models.GetDataSourcesByTypeQuery) error {
+	for _, datasource := range s.DataSources {
+		typeMatch := query.Type != "" && query.Type == datasource.Type
+		if typeMatch {
+			query.Result = append(query.Result, datasource)
+		}
+	}
+	return nil
+}
+
+func (s *FakeDataSourceService) AddDataSource(ctx context.Context, cmd *models.AddDataSourceCommand) error {
+	if s.lastId == 0 {
+		s.lastId = int64(len(s.DataSources) - 1)
+	}
+	cmd.Result = &models.DataSource{
+		Id:    s.lastId + 1,
+		Name:  cmd.Name,
+		Type:  cmd.Type,
+		Uid:   cmd.Uid,
+		OrgId: cmd.OrgId,
+	}
+	s.DataSources = append(s.DataSources, cmd.Result)
+	return nil
+}
+
+func (s *FakeDataSourceService) DeleteDataSource(ctx context.Context, cmd *models.DeleteDataSourceCommand) error {
+	for i, datasource := range s.DataSources {
+		idMatch := cmd.ID != 0 && cmd.ID == datasource.Id
+		uidMatch := cmd.UID != "" && cmd.UID == datasource.Uid
+		nameMatch := cmd.Name != "" && cmd.Name == datasource.Name
+		if idMatch || nameMatch || uidMatch {
+			s.DataSources = append(s.DataSources[:i], s.DataSources[i+1:]...)
+			return nil
+		}
+	}
+	return models.ErrDataSourceNotFound
+}
+
+func (s *FakeDataSourceService) UpdateDataSource(ctx context.Context, cmd *models.UpdateDataSourceCommand) error {
+	for _, datasource := range s.DataSources {
+		idMatch := cmd.Id != 0 && cmd.Id == datasource.Id
+		uidMatch := cmd.Uid != "" && cmd.Uid == datasource.Uid
+		nameMatch := cmd.Name != "" && cmd.Name == datasource.Name
+		if idMatch || nameMatch || uidMatch {
+			if cmd.Name != "" {
+				datasource.Name = cmd.Name
+			}
+			return nil
+		}
+	}
+	return models.ErrDataSourceNotFound
+}
+
+func (s *FakeDataSourceService) GetDefaultDataSource(ctx context.Context, query *models.GetDefaultDataSourceQuery) error {
+	return nil
+}
+
+func (s *FakeDataSourceService) GetHTTPTransport(ctx context.Context, ds *models.DataSource, provider httpclient.Provider, customMiddlewares ...sdkhttpclient.Middleware) (http.RoundTripper, error) {
+	rt, err := provider.GetTransport(sdkhttpclient.Options{})
+	if err != nil {
+		return nil, err
+	}
+	return rt, nil
+}
+
+func (s *FakeDataSourceService) DecryptedValues(ctx context.Context, ds *models.DataSource) (map[string]string, error) {
+	values := make(map[string]string)
+	return values, nil
+}
+
+func (s *FakeDataSourceService) DecryptedValue(ctx context.Context, ds *models.DataSource, key string) (string, bool, error) {
+	return "", false, nil
+}
+
+func (s *FakeDataSourceService) DecryptedBasicAuthPassword(ctx context.Context, ds *models.DataSource) (string, error) {
+	return "", nil
+}
+
+func (s *FakeDataSourceService) DecryptedPassword(ctx context.Context, ds *models.DataSource) (string, error) {
+	return "", nil
+}
@@ -3,6 +3,7 @@ package service
 import (
 	"context"
 	"crypto/tls"
+	"encoding/json"
 	"fmt"
 	"net/http"
 	"net/url"
@@ -23,20 +24,21 @@ import (
 	"github.com/grafana/grafana/pkg/services/datasources"
 	"github.com/grafana/grafana/pkg/services/featuremgmt"
 	"github.com/grafana/grafana/pkg/services/secrets"
+	"github.com/grafana/grafana/pkg/services/secrets/kvstore"
 	"github.com/grafana/grafana/pkg/services/sqlstore"
 	"github.com/grafana/grafana/pkg/setting"
 )

 type Service struct {
 	SQLStore           *sqlstore.SQLStore
+	SecretsStore       kvstore.SecretsKVStore
 	SecretsService     secrets.Service
 	cfg                *setting.Cfg
 	features           featuremgmt.FeatureToggles
 	permissionsService accesscontrol.PermissionsService
 	ac                 accesscontrol.AccessControl

-	ptc               proxyTransportCache
-	dsDecryptionCache secureJSONDecryptionCache
+	ptc proxyTransportCache
 }

 type proxyTransportCache struct {
@@ -49,29 +51,17 @@ type cachedRoundTripper struct {
 	roundTripper http.RoundTripper
 }

-type secureJSONDecryptionCache struct {
-	cache map[int64]cachedDecryptedJSON
-	sync.Mutex
-}
-
-type cachedDecryptedJSON struct {
-	updated time.Time
-	json    map[string]string
-}
-
 func ProvideService(
-	store *sqlstore.SQLStore, secretsService secrets.Service, cfg *setting.Cfg, features featuremgmt.FeatureToggles,
-	ac accesscontrol.AccessControl, permissionsServices accesscontrol.PermissionsServices,
+	store *sqlstore.SQLStore, secretsService secrets.Service, secretsStore kvstore.SecretsKVStore, cfg *setting.Cfg,
+	features featuremgmt.FeatureToggles, ac accesscontrol.AccessControl, permissionsServices accesscontrol.PermissionsServices,
 ) *Service {
 	s := &Service{
 		SQLStore:       store,
+		SecretsStore:   secretsStore,
 		SecretsService: secretsService,
 		ptc: proxyTransportCache{
 			cache: make(map[int64]cachedRoundTripper),
 		},
-		dsDecryptionCache: secureJSONDecryptionCache{
-			cache: make(map[int64]cachedDecryptedJSON),
-		},
 		cfg:                cfg,
 		features:           features,
 		permissionsService: permissionsServices.GetDataSourceService(),
@@ -90,6 +80,8 @@ type DataSourceRetriever interface {
 	GetDataSource(ctx context.Context, query *models.GetDataSourceQuery) error
 }

+const secretType = "datasource"
+
 // NewNameScopeResolver provides an AttributeScopeResolver able to
 // translate a scope prefixed with "datasources:name:" into an uid based scope.
 func NewNameScopeResolver(db DataSourceRetriever) (string, accesscontrol.AttributeScopeResolveFunc) {
@@ -155,12 +147,17 @@ func (s *Service) GetDataSourcesByType(ctx context.Context, query *models.GetDat

 func (s *Service) AddDataSource(ctx context.Context, cmd *models.AddDataSourceCommand) error {
-	var err error
-	cmd.EncryptedSecureJsonData, err = s.SecretsService.EncryptJsonData(ctx, cmd.SecureJsonData, secrets.WithoutScope())
+	if err := s.SQLStore.AddDataSource(ctx, cmd); err != nil {
+		return err
+	}
+
+	secret, err := json.Marshal(cmd.SecureJsonData)
 	if err != nil {
 		return err
 	}

-	if err := s.SQLStore.AddDataSource(ctx, cmd); err != nil {
+	err = s.SecretsStore.Set(ctx, cmd.OrgId, cmd.Name, secretType, string(secret))
+	if err != nil {
 		return err
 	}
@@ -186,25 +183,50 @@ func (s *Service) AddDataSource(ctx context.Context, cmd *models.AddDataSourceCo
 }

 func (s *Service) DeleteDataSource(ctx context.Context, cmd *models.DeleteDataSourceCommand) error {
-	return s.SQLStore.DeleteDataSource(ctx, cmd)
+	err := s.SQLStore.DeleteDataSource(ctx, cmd)
+	if err != nil {
+		return err
+	}
+	return s.SecretsStore.Del(ctx, cmd.OrgID, cmd.Name, secretType)
 }

 func (s *Service) UpdateDataSource(ctx context.Context, cmd *models.UpdateDataSourceCommand) error {
-	var err error
-	cmd.EncryptedSecureJsonData, err = s.SecretsService.EncryptJsonData(ctx, cmd.SecureJsonData, secrets.WithoutScope())
+	secret, err := json.Marshal(cmd.SecureJsonData)
 	if err != nil {
 		return err
 	}

-	return s.SQLStore.UpdateDataSource(ctx, cmd)
+	query := &models.GetDataSourceQuery{
+		Id:    cmd.Id,
+		OrgId: cmd.OrgId,
+	}
+	err = s.SQLStore.GetDataSource(ctx, query)
+	if err != nil {
+		return err
+	}
+
+	err = s.SQLStore.UpdateDataSource(ctx, cmd)
+	if err != nil {
+		return err
+	}
+
+	if query.Result.Name != cmd.Name {
+		err = s.SecretsStore.Rename(ctx, cmd.OrgId, query.Result.Name, secretType, cmd.Name)
+		if err != nil {
+			return err
+		}
+	}
+
+	return s.SecretsStore.Set(ctx, cmd.OrgId, cmd.Name, secretType, string(secret))
 }

 func (s *Service) GetDefaultDataSource(ctx context.Context, query *models.GetDefaultDataSourceQuery) error {
 	return s.SQLStore.GetDefaultDataSource(ctx, query)
 }

-func (s *Service) GetHTTPClient(ds *models.DataSource, provider httpclient.Provider) (*http.Client, error) {
-	transport, err := s.GetHTTPTransport(ds, provider)
+func (s *Service) GetHTTPClient(ctx context.Context, ds *models.DataSource, provider httpclient.Provider) (*http.Client, error) {
+	transport, err := s.GetHTTPTransport(ctx, ds, provider)
 	if err != nil {
 		return nil, err
 	}
@@ -215,7 +237,7 @@ func (s *Service) GetHTTPClient(ds *models.DataSource, provider httpclient.Provi
 	}, nil
 }

-func (s *Service) GetHTTPTransport(ds *models.DataSource, provider httpclient.Provider,
+func (s *Service) GetHTTPTransport(ctx context.Context, ds *models.DataSource, provider httpclient.Provider,
 	customMiddlewares ...sdkhttpclient.Middleware) (http.RoundTripper, error) {
 	s.ptc.Lock()
 	defer s.ptc.Unlock()
@@ -224,7 +246,7 @@ func (s *Service) GetHTTPTransport(ds *models.DataSource, provider httpclient.Pr
 		return t.roundTripper, nil
 	}

-	opts, err := s.httpClientOptions(ds)
+	opts, err := s.httpClientOptions(ctx, ds)
 	if err != nil {
 		return nil, err
 	}
@ -244,58 +266,84 @@ func (s *Service) GetHTTPTransport(ds *models.DataSource, provider httpclient.Pr
|
||||
return rt, nil
|
||||
}

func (s *Service) GetTLSConfig(ds *models.DataSource, httpClientProvider httpclient.Provider) (*tls.Config, error) {
opts, err := s.httpClientOptions(ds)
func (s *Service) GetTLSConfig(ctx context.Context, ds *models.DataSource, httpClientProvider httpclient.Provider) (*tls.Config, error) {
opts, err := s.httpClientOptions(ctx, ds)
if err != nil {
return nil, err
}
return httpClientProvider.GetTLSConfig(*opts)
}

func (s *Service) DecryptedValues(ds *models.DataSource) map[string]string {
s.dsDecryptionCache.Lock()
defer s.dsDecryptionCache.Unlock()

if item, present := s.dsDecryptionCache.cache[ds.Id]; present && ds.Updated.Equal(item.updated) {
return item.json
}

json, err := s.SecretsService.DecryptJsonData(context.Background(), ds.SecureJsonData)
func (s *Service) DecryptedValues(ctx context.Context, ds *models.DataSource) (map[string]string, error) {
decryptedValues := make(map[string]string)
secret, exist, err := s.SecretsStore.Get(ctx, ds.OrgId, ds.Name, secretType)
if err != nil {
return map[string]string{}
return nil, err
}

s.dsDecryptionCache.cache[ds.Id] = cachedDecryptedJSON{
updated: ds.Updated,
json: json,
if exist {
err := json.Unmarshal([]byte(secret), &decryptedValues)
if err != nil {
return nil, err
}
} else if len(ds.SecureJsonData) > 0 {
decryptedValues, err = s.MigrateSecrets(ctx, ds)
if err != nil {
return nil, err
}
}

return json
return decryptedValues, nil
}

func (s *Service) DecryptedValue(ds *models.DataSource, key string) (string, bool) {
value, exists := s.DecryptedValues(ds)[key]
return value, exists
}

func (s *Service) DecryptedBasicAuthPassword(ds *models.DataSource) string {
if value, ok := s.DecryptedValue(ds, "basicAuthPassword"); ok {
return value
func (s *Service) MigrateSecrets(ctx context.Context, ds *models.DataSource) (map[string]string, error) {
secureJsonData, err := s.SecretsService.DecryptJsonData(ctx, ds.SecureJsonData)
if err != nil {
return nil, err
}

return ds.BasicAuthPassword
}

func (s *Service) DecryptedPassword(ds *models.DataSource) string {
if value, ok := s.DecryptedValue(ds, "password"); ok {
return value
jsonData, err := json.Marshal(secureJsonData)
if err != nil {
return nil, err
}

return ds.Password
err = s.SecretsStore.Set(ctx, ds.OrgId, ds.Name, secretType, string(jsonData))
return secureJsonData, err
}

func (s *Service) httpClientOptions(ds *models.DataSource) (*sdkhttpclient.Options, error) {
tlsOptions := s.dsTLSOptions(ds)
func (s *Service) DecryptedValue(ctx context.Context, ds *models.DataSource, key string) (string, bool, error) {
values, err := s.DecryptedValues(ctx, ds)
if err != nil {
return "", false, err
}
value, exists := values[key]
return value, exists, nil
}

func (s *Service) DecryptedBasicAuthPassword(ctx context.Context, ds *models.DataSource) (string, error) {
value, ok, err := s.DecryptedValue(ctx, ds, "basicAuthPassword")
if ok {
return value, nil
}

return ds.BasicAuthPassword, err
}

func (s *Service) DecryptedPassword(ctx context.Context, ds *models.DataSource) (string, error) {
value, ok, err := s.DecryptedValue(ctx, ds, "password")
if ok {
return value, nil
}

return ds.Password, err
}

func (s *Service) httpClientOptions(ctx context.Context, ds *models.DataSource) (*sdkhttpclient.Options, error) {
tlsOptions, err := s.dsTLSOptions(ctx, ds)
if err != nil {
return nil, err
}

timeouts := &sdkhttpclient.TimeoutOptions{
Timeout: s.getTimeout(ds),
DialTimeout: sdkhttpclient.DefaultTimeoutOptions.DialTimeout,
@@ -307,9 +355,15 @@ func (s *Service) httpClientOptions(ds *models.DataSource) (*sdkhttpclient.Optio
MaxIdleConnsPerHost: sdkhttpclient.DefaultTimeoutOptions.MaxIdleConnsPerHost,
IdleConnTimeout: sdkhttpclient.DefaultTimeoutOptions.IdleConnTimeout,
}

decryptedValues, err := s.DecryptedValues(ctx, ds)
if err != nil {
return nil, err
}

opts := &sdkhttpclient.Options{
Timeouts: timeouts,
Headers: s.getCustomHeaders(ds.JsonData, s.DecryptedValues(ds)),
Headers: s.getCustomHeaders(ds.JsonData, decryptedValues),
Labels: map[string]string{
"datasource_name": ds.Name,
"datasource_uid": ds.Uid,
@@ -320,22 +374,30 @@ func (s *Service) httpClientOptions(ds *models.DataSource) (*sdkhttpclient.Optio
if ds.JsonData != nil {
opts.CustomOptions = ds.JsonData.MustMap()
}

if ds.BasicAuth {
password, err := s.DecryptedBasicAuthPassword(ctx, ds)
if err != nil {
return opts, err
}

opts.BasicAuth = &sdkhttpclient.BasicAuthOptions{
User: ds.BasicAuthUser,
Password: s.DecryptedBasicAuthPassword(ds),
Password: password,
}
} else if ds.User != "" {
password, err := s.DecryptedPassword(ctx, ds)
if err != nil {
return opts, err
}

opts.BasicAuth = &sdkhttpclient.BasicAuthOptions{
User: ds.User,
Password: s.DecryptedPassword(ds),
Password: password,
}
}

// Azure authentication
if ds.JsonData != nil && s.features.IsEnabled(featuremgmt.FlagHttpclientproviderAzureAuth) {
credentials, err := azcredentials.FromDatasourceData(ds.JsonData.MustMap(), s.DecryptedValues(ds))
credentials, err := azcredentials.FromDatasourceData(ds.JsonData.MustMap(), decryptedValues)
if err != nil {
err = fmt.Errorf("invalid Azure credentials: %s", err)
return nil, err
@@ -371,19 +433,27 @@ func (s *Service) httpClientOptions(ds *models.DataSource) (*sdkhttpclient.Optio
Profile: ds.JsonData.Get("sigV4Profile").MustString(),
}

if val, exists := s.DecryptedValue(ds, "sigV4AccessKey"); exists {
opts.SigV4.AccessKey = val
if val, exists, err := s.DecryptedValue(ctx, ds, "sigV4AccessKey"); err == nil {
if exists {
opts.SigV4.AccessKey = val
}
} else {
return opts, err
}

if val, exists := s.DecryptedValue(ds, "sigV4SecretKey"); exists {
opts.SigV4.SecretKey = val
if val, exists, err := s.DecryptedValue(ctx, ds, "sigV4SecretKey"); err == nil {
if exists {
opts.SigV4.SecretKey = val
}
} else {
return opts, err
}
}

return opts, nil
}

func (s *Service) dsTLSOptions(ds *models.DataSource) sdkhttpclient.TLSOptions {
func (s *Service) dsTLSOptions(ctx context.Context, ds *models.DataSource) (sdkhttpclient.TLSOptions, error) {
var tlsSkipVerify, tlsClientAuth, tlsAuthWithCACert bool
var serverName string

@@ -401,22 +471,35 @@ func (s *Service) dsTLSOptions(ds *models.DataSource) sdkhttpclient.TLSOptions {

if tlsClientAuth || tlsAuthWithCACert {
if tlsAuthWithCACert {
if val, exists := s.DecryptedValue(ds, "tlsCACert"); exists && len(val) > 0 {
opts.CACertificate = val
if val, exists, err := s.DecryptedValue(ctx, ds, "tlsCACert"); err == nil {
if exists && len(val) > 0 {
opts.CACertificate = val
}
} else {
return opts, err
}
}

if tlsClientAuth {
if val, exists := s.DecryptedValue(ds, "tlsClientCert"); exists && len(val) > 0 {
opts.ClientCertificate = val
if val, exists, err := s.DecryptedValue(ctx, ds, "tlsClientCert"); err == nil {
if exists && len(val) > 0 {
opts.ClientCertificate = val
}
} else {
return opts, err
}
if val, exists := s.DecryptedValue(ds, "tlsClientKey"); exists && len(val) > 0 {
opts.ClientKey = val
if val, exists, err := s.DecryptedValue(ctx, ds, "tlsClientKey"); err == nil {
if exists && len(val) > 0 {
opts.ClientKey = val
}
} else {
return opts, err
}
}
}

return opts
return opts, nil
}

func (s *Service) getTimeout(ds *models.DataSource) time.Duration {

@@ -2,6 +2,7 @@ package service

import (
"context"
"encoding/json"
"io/ioutil"
"net/http"
"net/http/httptest"
@@ -10,6 +11,8 @@ import (

"github.com/grafana/grafana-azure-sdk-go/azsettings"
sdkhttpclient "github.com/grafana/grafana-plugin-sdk-go/backend/httpclient"
"github.com/grafana/grafana/pkg/services/secrets"
secretsManager "github.com/grafana/grafana/pkg/services/secrets/manager"

"github.com/grafana/grafana/pkg/components/simplejson"
"github.com/grafana/grafana/pkg/infra/httpclient"
@@ -17,59 +20,13 @@ import (
"github.com/grafana/grafana/pkg/services/accesscontrol"
acmock "github.com/grafana/grafana/pkg/services/accesscontrol/mock"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/services/secrets"
"github.com/grafana/grafana/pkg/services/secrets/database"
"github.com/grafana/grafana/pkg/services/secrets/fakes"
secretsManager "github.com/grafana/grafana/pkg/services/secrets/manager"
"github.com/grafana/grafana/pkg/services/sqlstore"
"github.com/grafana/grafana/pkg/services/secrets/kvstore"
"github.com/grafana/grafana/pkg/setting"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)

func TestService(t *testing.T) {
cfg := &setting.Cfg{}
sqlStore := sqlstore.InitTestDB(t)

origSecret := setting.SecretKey
setting.SecretKey = "datasources_service_test"
t.Cleanup(func() {
setting.SecretKey = origSecret
})

secretsService := secretsManager.SetupTestService(t, database.ProvideSecretsStore(sqlStore))
s := ProvideService(sqlStore, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New().WithDisabled(), acmock.NewPermissionsServicesMock())

var ds *models.DataSource

t.Run("create datasource should encrypt the secure json data", func(t *testing.T) {
ctx := context.Background()

sjd := map[string]string{"password": "12345"}
cmd := models.AddDataSourceCommand{SecureJsonData: sjd}

err := s.AddDataSource(ctx, &cmd)
require.NoError(t, err)

ds = cmd.Result
decrypted, err := s.SecretsService.DecryptJsonData(ctx, ds.SecureJsonData)
require.NoError(t, err)
require.Equal(t, sjd, decrypted)
})

t.Run("update datasource should encrypt the secure json data", func(t *testing.T) {
ctx := context.Background()
sjd := map[string]string{"password": "678910"}
cmd := models.UpdateDataSourceCommand{Id: ds.Id, OrgId: ds.OrgId, SecureJsonData: sjd}
err := s.UpdateDataSource(ctx, &cmd)
require.NoError(t, err)

decrypted, err := s.SecretsService.DecryptJsonData(ctx, cmd.Result.SecureJsonData)
require.NoError(t, err)
require.Equal(t, sjd, decrypted)
})
}

type dataSourceMockRetriever struct {
res []*models.DataSource
}
@@ -237,15 +194,16 @@ func TestService_GetHttpTransport(t *testing.T) {
Type: "Kubernetes",
}

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
dsService := ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

rt1, err := dsService.GetHTTPTransport(&ds, provider)
rt1, err := dsService.GetHTTPTransport(context.Background(), &ds, provider)
require.NoError(t, err)
require.NotNil(t, rt1)
tr1 := configuredTransport

rt2, err := dsService.GetHTTPTransport(&ds, provider)
rt2, err := dsService.GetHTTPTransport(context.Background(), &ds, provider)
require.NoError(t, err)
require.NotNil(t, rt2)
tr2 := configuredTransport
@@ -267,24 +225,22 @@ func TestService_GetHttpTransport(t *testing.T) {

setting.SecretKey = "password"

json := simplejson.New()
json.Set("tlsAuthWithCACert", true)
sjson := simplejson.New()
sjson.Set("tlsAuthWithCACert", true)

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

tlsCaCert, err := secretsService.Encrypt(context.Background(), []byte(caCert), secrets.WithoutScope())
require.NoError(t, err)
dsService := ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

ds := models.DataSource{
Id: 1,
Url: "http://k8s:8001",
Type: "Kubernetes",
SecureJsonData: map[string][]byte{"tlsCACert": tlsCaCert},
SecureJsonData: map[string][]byte{"tlsCACert": []byte(caCert)},
Updated: time.Now().Add(-2 * time.Minute),
}

rt1, err := dsService.GetHTTPTransport(&ds, provider)
rt1, err := dsService.GetHTTPTransport(context.Background(), &ds, provider)
require.NotNil(t, rt1)
require.NoError(t, err)

@@ -298,7 +254,7 @@ func TestService_GetHttpTransport(t *testing.T) {
ds.SecureJsonData = map[string][]byte{}
ds.Updated = time.Now()

rt2, err := dsService.GetHTTPTransport(&ds, provider)
rt2, err := dsService.GetHTTPTransport(context.Background(), &ds, provider)
require.NoError(t, err)
require.NotNil(t, rt2)
tr2 := configuredTransport
@@ -317,30 +273,32 @@ func TestService_GetHttpTransport(t *testing.T) {

setting.SecretKey = "password"

json := simplejson.New()
json.Set("tlsAuth", true)
sjson := simplejson.New()
sjson.Set("tlsAuth", true)

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

tlsClientCert, err := secretsService.Encrypt(context.Background(), []byte(clientCert), secrets.WithoutScope())
require.NoError(t, err)

tlsClientKey, err := secretsService.Encrypt(context.Background(), []byte(clientKey), secrets.WithoutScope())
require.NoError(t, err)
dsService := ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

ds := models.DataSource{
Id: 1,
OrgId: 1,
Name: "kubernetes",
Url: "http://k8s:8001",
Type: "Kubernetes",
JsonData: json,
SecureJsonData: map[string][]byte{
"tlsClientCert": tlsClientCert,
"tlsClientKey": tlsClientKey,
},
JsonData: sjson,
}

rt, err := dsService.GetHTTPTransport(&ds, provider)
secureJsonData, err := json.Marshal(map[string]string{
"tlsClientCert": clientCert,
"tlsClientKey": clientKey,
})
require.NoError(t, err)

err = secretsStore.Set(context.Background(), ds.OrgId, ds.Name, secretType, string(secureJsonData))
require.NoError(t, err)

rt, err := dsService.GetHTTPTransport(context.Background(), &ds, provider)
require.NoError(t, err)
require.NotNil(t, rt)
tr := configuredTransport
@@ -359,27 +317,32 @@ func TestService_GetHttpTransport(t *testing.T) {

setting.SecretKey = "password"

json := simplejson.New()
json.Set("tlsAuthWithCACert", true)
json.Set("serverName", "server-name")
sjson := simplejson.New()
sjson.Set("tlsAuthWithCACert", true)
sjson.Set("serverName", "server-name")

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

tlsCaCert, err := secretsService.Encrypt(context.Background(), []byte(caCert), secrets.WithoutScope())
require.NoError(t, err)
dsService := ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

ds := models.DataSource{
Id: 1,
OrgId: 1,
Name: "kubernetes",
Url: "http://k8s:8001",
Type: "Kubernetes",
JsonData: json,
SecureJsonData: map[string][]byte{
"tlsCACert": tlsCaCert,
},
JsonData: sjson,
}

rt, err := dsService.GetHTTPTransport(&ds, provider)
secureJsonData, err := json.Marshal(map[string]string{
"tlsCACert": caCert,
})
require.NoError(t, err)

err = secretsStore.Set(context.Background(), ds.OrgId, ds.Name, secretType, string(secureJsonData))
require.NoError(t, err)

rt, err := dsService.GetHTTPTransport(context.Background(), &ds, provider)
require.NoError(t, err)
require.NotNil(t, rt)
tr := configuredTransport
@@ -397,25 +360,26 @@ func TestService_GetHttpTransport(t *testing.T) {
},
})

json := simplejson.New()
json.Set("tlsSkipVerify", true)
sjson := simplejson.New()
sjson.Set("tlsSkipVerify", true)

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
dsService := ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

ds := models.DataSource{
Id: 1,
Url: "http://k8s:8001",
Type: "Kubernetes",
JsonData: json,
JsonData: sjson,
}

rt1, err := dsService.GetHTTPTransport(&ds, provider)
rt1, err := dsService.GetHTTPTransport(context.Background(), &ds, provider)
require.NoError(t, err)
require.NotNil(t, rt1)
tr1 := configuredTransport

rt2, err := dsService.GetHTTPTransport(&ds, provider)
rt2, err := dsService.GetHTTPTransport(context.Background(), &ds, provider)
require.NoError(t, err)
require.NotNil(t, rt2)
tr2 := configuredTransport
@@ -427,25 +391,32 @@ func TestService_GetHttpTransport(t *testing.T) {
t.Run("Should set custom headers if configured in JsonData", func(t *testing.T) {
provider := httpclient.NewProvider()

json := simplejson.NewFromAny(map[string]interface{}{
sjson := simplejson.NewFromAny(map[string]interface{}{
"httpHeaderName1": "Authorization",
})

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

encryptedData, err := secretsService.Encrypt(context.Background(), []byte(`Bearer xf5yhfkpsnmgo`), secrets.WithoutScope())
require.NoError(t, err)
dsService := ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

ds := models.DataSource{
Id: 1,
Url: "http://k8s:8001",
Type: "Kubernetes",
JsonData: json,
SecureJsonData: map[string][]byte{"httpHeaderValue1": encryptedData},
Id: 1,
OrgId: 1,
Name: "kubernetes",
Url: "http://k8s:8001",
Type: "Kubernetes",
JsonData: sjson,
}

headers := dsService.getCustomHeaders(json, map[string]string{"httpHeaderValue1": "Bearer xf5yhfkpsnmgo"})
secureJsonData, err := json.Marshal(map[string]string{
"httpHeaderValue1": "Bearer xf5yhfkpsnmgo",
})
require.NoError(t, err)

err = secretsStore.Set(context.Background(), ds.OrgId, ds.Name, secretType, string(secureJsonData))
require.NoError(t, err)

headers := dsService.getCustomHeaders(sjson, map[string]string{"httpHeaderValue1": "Bearer xf5yhfkpsnmgo"})
require.Equal(t, "Bearer xf5yhfkpsnmgo", headers["Authorization"])

// 1. Start HTTP test server which checks the request headers
@@ -465,7 +436,7 @@ func TestService_GetHttpTransport(t *testing.T) {

// 2. Get HTTP transport from datasource which uses the test server as backend
ds.Url = backend.URL
rt, err := dsService.GetHTTPTransport(&ds, provider)
rt, err := dsService.GetHTTPTransport(context.Background(), &ds, provider)
require.NoError(t, err)
require.NotNil(t, rt)

@@ -486,21 +457,22 @@ func TestService_GetHttpTransport(t *testing.T) {
t.Run("Should use request timeout if configured in JsonData", func(t *testing.T) {
provider := httpclient.NewProvider()

json := simplejson.NewFromAny(map[string]interface{}{
sjson := simplejson.NewFromAny(map[string]interface{}{
"timeout": 19,
})

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
dsService := ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

ds := models.DataSource{
Id: 1,
Url: "http://k8s:8001",
Type: "Kubernetes",
JsonData: json,
JsonData: sjson,
}

client, err := dsService.GetHTTPClient(&ds, provider)
client, err := dsService.GetHTTPClient(context.Background(), &ds, provider)
require.NoError(t, err)
require.NotNil(t, client)
require.Equal(t, 19*time.Second, client.Timeout)
@@ -520,18 +492,19 @@ func TestService_GetHttpTransport(t *testing.T) {
setting.SigV4AuthEnabled = origSigV4Enabled
})

json, err := simplejson.NewJson([]byte(`{ "sigV4Auth": true }`))
sjson, err := simplejson.NewJson([]byte(`{ "sigV4Auth": true }`))
require.NoError(t, err)

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
dsService := ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

ds := models.DataSource{
Type: models.DS_ES,
JsonData: json,
JsonData: sjson,
}

_, err = dsService.GetHTTPTransport(&ds, provider)
_, err = dsService.GetHTTPTransport(context.Background(), &ds, provider)
require.NoError(t, err)
require.NotNil(t, configuredOpts)
require.NotNil(t, configuredOpts.SigV4)
@@ -558,8 +531,9 @@ func TestService_getTimeout(t *testing.T) {
{jsonData: simplejson.NewFromAny(map[string]interface{}{"timeout": "2"}), expectedTimeout: 2 * time.Second},
}

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
dsService := ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

for _, tc := range testCases {
ds := &models.DataSource{
@@ -569,86 +543,6 @@ func TestService_getTimeout(t *testing.T) {
}
}

func TestService_DecryptedValue(t *testing.T) {
cfg := &setting.Cfg{}

t.Run("When datasource hasn't been updated, encrypted JSON should be fetched from cache", func(t *testing.T) {
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

encryptedJsonData, err := secretsService.EncryptJsonData(
context.Background(),
map[string]string{
"password": "password",
}, secrets.WithoutScope())
require.NoError(t, err)

ds := models.DataSource{
Id: 1,
Type: models.DS_INFLUXDB_08,
JsonData: simplejson.New(),
User: "user",
SecureJsonData: encryptedJsonData,
}

// Populate cache
password, ok := dsService.DecryptedValue(&ds, "password")
require.True(t, ok)
require.Equal(t, "password", password)

encryptedJsonData, err = secretsService.EncryptJsonData(
context.Background(),
map[string]string{
"password": "",
}, secrets.WithoutScope())
require.NoError(t, err)

ds.SecureJsonData = encryptedJsonData

password, ok = dsService.DecryptedValue(&ds, "password")
require.True(t, ok)
require.Equal(t, "password", password)
})

t.Run("When datasource is updated, encrypted JSON should not be fetched from cache", func(t *testing.T) {
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())

encryptedJsonData, err := secretsService.EncryptJsonData(
context.Background(),
map[string]string{
"password": "password",
}, secrets.WithoutScope())
require.NoError(t, err)

ds := models.DataSource{
Id: 1,
Type: models.DS_INFLUXDB_08,
JsonData: simplejson.New(),
User: "user",
SecureJsonData: encryptedJsonData,
}

dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())

// Populate cache
password, ok := dsService.DecryptedValue(&ds, "password")
require.True(t, ok)
require.Equal(t, "password", password)

ds.SecureJsonData, err = secretsService.EncryptJsonData(
context.Background(),
map[string]string{
"password": "",
}, secrets.WithoutScope())
ds.Updated = time.Now()
require.NoError(t, err)

password, ok = dsService.DecryptedValue(&ds, "password")
require.True(t, ok)
require.Empty(t, password)
})
}

func TestService_HTTPClientOptions(t *testing.T) {
cfg := &setting.Cfg{
Azure: &azsettings.AzureSettings{},
@@ -678,10 +572,11 @@ func TestService_HTTPClientOptions(t *testing.T) {
"azureEndpointResourceId": "https://api.example.com/abd5c4ce-ca73-41e9-9cb2-bed39aa2adb5",
})

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())
dsService := ProvideService(nil, secretsService, secretsStore, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())

opts, err := dsService.httpClientOptions(&ds)
opts, err := dsService.httpClientOptions(context.Background(), &ds)
require.NoError(t, err)

require.NotNil(t, opts.Middlewares)
@@ -695,10 +590,11 @@ func TestService_HTTPClientOptions(t *testing.T) {
"httpMethod": "POST",
})

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())
dsService := ProvideService(nil, secretsService, secretsStore, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())

opts, err := dsService.httpClientOptions(&ds)
opts, err := dsService.httpClientOptions(context.Background(), &ds)
require.NoError(t, err)

if opts.Middlewares != nil {
@@ -714,10 +610,11 @@ func TestService_HTTPClientOptions(t *testing.T) {
"azureCredentials": "invalid",
})

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())
dsService := ProvideService(nil, secretsService, secretsStore, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())

_, err := dsService.httpClientOptions(&ds)
_, err := dsService.httpClientOptions(context.Background(), &ds)
assert.Error(t, err)
})

@@ -732,10 +629,11 @@ func TestService_HTTPClientOptions(t *testing.T) {
"azureEndpointResourceId": "https://api.example.com/abd5c4ce-ca73-41e9-9cb2-bed39aa2adb5",
})

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())
dsService := ProvideService(nil, secretsService, secretsStore, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())

opts, err := dsService.httpClientOptions(&ds)
opts, err := dsService.httpClientOptions(context.Background(), &ds)
require.NoError(t, err)

require.NotNil(t, opts.Middlewares)
@@ -750,10 +648,11 @@ func TestService_HTTPClientOptions(t *testing.T) {
"azureEndpointResourceId": "https://api.example.com/abd5c4ce-ca73-41e9-9cb2-bed39aa2adb5",
})

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())
dsService := ProvideService(nil, secretsService, secretsStore, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())

opts, err := dsService.httpClientOptions(&ds)
opts, err := dsService.httpClientOptions(context.Background(), &ds)
require.NoError(t, err)

if opts.Middlewares != nil {
@@ -772,10 +671,11 @@ func TestService_HTTPClientOptions(t *testing.T) {
"azureEndpointResourceId": "invalid",
})

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := ProvideService(nil, secretsService, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())
dsService := ProvideService(nil, secretsService, secretsStore, cfg, features, acmock.New(), acmock.NewPermissionsServicesMock())

_, err := dsService.httpClientOptions(&ds)
_, err := dsService.httpClientOptions(context.Background(), &ds)
assert.Error(t, err)
})
})
@@ -792,10 +692,11 @@ func TestService_HTTPClientOptions(t *testing.T) {
"azureEndpointResourceId": "https://api.example.com/abd5c4ce-ca73-41e9-9cb2-bed39aa2adb5",
})

secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
|
||||
dsService := ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
dsService := ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
|
||||
opts, err := dsService.httpClientOptions(&ds)
|
||||
opts, err := dsService.httpClientOptions(context.Background(), &ds)
|
||||
require.NoError(t, err)
|
||||
|
||||
if opts.Middlewares != nil {
|
||||
@ -806,6 +707,59 @@ func TestService_HTTPClientOptions(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func TestService_GetDecryptedValues(t *testing.T) {
|
||||
t.Run("should migrate and retrieve values from secure json data", func(t *testing.T) {
|
||||
ds := &models.DataSource{
|
||||
Id: 1,
|
||||
Url: "https://api.example.com",
|
||||
Type: "prometheus",
|
||||
}
|
||||
|
||||
secretsStore := kvstore.SetupTestService(t)
|
||||
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
|
||||
dsService := ProvideService(nil, secretsService, secretsStore, nil, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
|
||||
jsonData := map[string]string{
|
||||
"password": "securePassword",
|
||||
}
|
||||
secureJsonData, err := dsService.SecretsService.EncryptJsonData(context.Background(), jsonData, secrets.WithoutScope())
|
||||
|
||||
require.NoError(t, err)
|
||||
ds.SecureJsonData = secureJsonData
|
||||
|
||||
values, err := dsService.DecryptedValues(context.Background(), ds)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Equal(t, jsonData, values)
|
||||
})
|
||||
|
||||
t.Run("should retrieve values from secret store", func(t *testing.T) {
|
||||
ds := &models.DataSource{
|
||||
Id: 1,
|
||||
Url: "https://api.example.com",
|
||||
Type: "prometheus",
|
||||
}
|
||||
|
||||
secretsStore := kvstore.SetupTestService(t)
|
||||
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
|
||||
dsService := ProvideService(nil, secretsService, secretsStore, nil, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
|
||||
|
||||
jsonData := map[string]string{
|
||||
"password": "securePassword",
|
||||
}
|
||||
jsonString, err := json.Marshal(jsonData)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = secretsStore.Set(context.Background(), ds.OrgId, ds.Name, secretType, string(jsonString))
|
||||
require.NoError(t, err)
|
||||
|
||||
values, err := dsService.DecryptedValues(context.Background(), ds)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Equal(t, jsonData, values)
|
||||
})
|
||||
}
|
||||
|
||||
const caCert string = `-----BEGIN CERTIFICATE-----
|
||||
MIIDATCCAemgAwIBAgIJAMQ5hC3CPDTeMA0GCSqGSIb3DQEBCwUAMBcxFTATBgNV
|
||||
BAMMDGNhLWs4cy1zdGhsbTAeFw0xNjEwMjcwODQyMjdaFw00NDAzMTQwODQyMjda
|
||||
|
103 pkg/services/export/dummy_job.go Normal file
@@ -0,0 +1,103 @@
+package export
+
+import (
+	"errors"
+	"fmt"
+	"math"
+	"math/rand"
+	"sync"
+	"time"
+
+	"github.com/grafana/grafana/pkg/infra/log"
+)
+
+var _ Job = new(dummyExportJob)
+
+type dummyExportJob struct {
+	logger log.Logger
+
+	statusMu    sync.Mutex
+	status      ExportStatus
+	cfg         ExportConfig
+	broadcaster statusBroadcaster
+}
+
+func startDummyExportJob(cfg ExportConfig, broadcaster statusBroadcaster) (Job, error) {
+	if cfg.Format != "git" {
+		return nil, errors.New("only git format is supported")
+	}
+
+	job := &dummyExportJob{
+		logger:      log.New("dummy_export_job"),
+		cfg:         cfg,
+		broadcaster: broadcaster,
+		status: ExportStatus{
+			Running: true,
+			Target:  "git export",
+			Started: time.Now().UnixMilli(),
+			Count:   int64(math.Round(10 + rand.Float64()*20)),
+			Current: 0,
+		},
+	}
+
+	broadcaster(job.status)
+	go job.start()
+	return job, nil
+}
+
+func (e *dummyExportJob) start() {
+	defer func() {
+		e.logger.Info("Finished dummy export job")
+
+		e.statusMu.Lock()
+		defer e.statusMu.Unlock()
+		s := e.status
+		if err := recover(); err != nil {
+			e.logger.Error("export panic", "error", err)
+			s.Status = fmt.Sprintf("ERROR: %v", err)
+		}
+		// Make sure it finishes OK
+		if s.Finished < 10 {
+			s.Finished = time.Now().UnixMilli()
+		}
+		s.Running = false
+		if s.Status == "" {
+			s.Status = "done"
+		}
+		e.status = s
+		e.broadcaster(s)
+	}()
+
+	e.logger.Info("Starting dummy export job")
+
+	ticker := time.NewTicker(1 * time.Second)
+	for t := range ticker.C {
+		e.statusMu.Lock()
+		e.status.Changed = t.UnixMilli()
+		e.status.Current++
+		e.status.Last = fmt.Sprintf("ITEM: %d", e.status.Current)
+		e.statusMu.Unlock()
+
+		// Wait till we are done
+		shouldStop := e.status.Current >= e.status.Count
+		e.broadcaster(e.status)
+
+		if shouldStop {
+			break
+		}
+	}
+}
+
+func (e *dummyExportJob) getStatus() ExportStatus {
+	e.statusMu.Lock()
+	defer e.statusMu.Unlock()
+
+	return e.status
+}
+
+func (e *dummyExportJob) getConfig() ExportConfig {
+	e.statusMu.Lock()
+	defer e.statusMu.Unlock()
+
+	return e.cfg
+}
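The ticker-driven job above follows a common Go pattern: a background goroutine mutates mutex-guarded status and pushes a copy of it through a broadcaster callback, while readers take the same mutex. The sketch below is a minimal, self-contained version of that pattern; it drops the one-second ticker so it runs instantly, and all names (`Status`, `job`, `startJob`) are illustrative, not Grafana's.

```go
package main

import (
	"fmt"
	"sync"
)

// Status mirrors the shape of the export job's progress report (illustrative).
type Status struct {
	Running bool
	Current int64
	Count   int64
}

// job runs a fixed number of steps in a goroutine, guarding its
// status with a mutex and broadcasting a copy on every change.
type job struct {
	mu        sync.Mutex
	status    Status
	broadcast func(Status)
	done      chan struct{}
}

func startJob(count int64, broadcast func(Status)) *job {
	j := &job{
		status:    Status{Running: true, Count: count},
		broadcast: broadcast,
		done:      make(chan struct{}),
	}
	broadcast(j.status) // initial status, like the real job
	go j.run()
	return j
}

func (j *job) run() {
	defer close(j.done)
	for {
		j.mu.Lock()
		j.status.Current++
		stop := j.status.Current >= j.status.Count
		if stop {
			j.status.Running = false
		}
		s := j.status // copy under the lock, broadcast outside it
		j.mu.Unlock()
		j.broadcast(s)
		if stop {
			return
		}
	}
}

func (j *job) getStatus() Status {
	j.mu.Lock()
	defer j.mu.Unlock()
	return j.status
}

func main() {
	updates := 0
	j := startJob(3, func(Status) { updates++ })
	<-j.done
	fmt.Println(j.getStatus().Current, updates) // prints: 3 4
}
```

Broadcasting a copy taken under the lock, rather than the shared struct, is what keeps the callback free to run without holding the mutex.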
93 pkg/services/export/service.go Normal file
@@ -0,0 +1,93 @@
+package export
+
+import (
+	"encoding/json"
+	"net/http"
+	"sync"
+
+	"github.com/grafana/grafana/pkg/api/response"
+	"github.com/grafana/grafana/pkg/infra/log"
+	"github.com/grafana/grafana/pkg/models"
+	"github.com/grafana/grafana/pkg/services/featuremgmt"
+	"github.com/grafana/grafana/pkg/services/live"
+	"github.com/grafana/grafana/pkg/services/sqlstore"
+)
+
+type ExportService interface {
+	// Get the status of the current (or last) export job
+	HandleGetStatus(c *models.ReqContext) response.Response
+
+	// Request a new export job
+	HandleRequestExport(c *models.ReqContext) response.Response
+}
+
+type StandardExport struct {
+	logger log.Logger
+	sql    *sqlstore.SQLStore
+	glive  *live.GrafanaLive
+	mutex  sync.Mutex
+
+	// updated with mutex
+	exportJob Job
+}
+
+func ProvideService(sql *sqlstore.SQLStore, features featuremgmt.FeatureToggles, gl *live.GrafanaLive) ExportService {
+	if !features.IsEnabled(featuremgmt.FlagExport) {
+		return &StubExport{}
+	}
+
+	return &StandardExport{
+		sql:       sql,
+		glive:     gl,
+		logger:    log.New("export_service"),
+		exportJob: &stoppedJob{},
+	}
+}
+
+func (ex *StandardExport) HandleGetStatus(c *models.ReqContext) response.Response {
+	ex.mutex.Lock()
+	defer ex.mutex.Unlock()
+
+	return response.JSON(http.StatusOK, ex.exportJob.getStatus())
+}
+
+func (ex *StandardExport) HandleRequestExport(c *models.ReqContext) response.Response {
+	var cfg ExportConfig
+	err := json.NewDecoder(c.Req.Body).Decode(&cfg)
+	if err != nil {
+		return response.Error(http.StatusBadRequest, "unable to read config", err)
+	}
+
+	ex.mutex.Lock()
+	defer ex.mutex.Unlock()
+
+	status := ex.exportJob.getStatus()
+	if status.Running {
+		ex.logger.Error("export already running")
+		return response.Error(http.StatusLocked, "export already running", nil)
+	}
+
+	job, err := startDummyExportJob(cfg, func(s ExportStatus) {
+		ex.broadcastStatus(c.OrgId, s)
+	})
+	if err != nil {
+		ex.logger.Error("failed to start export job", "err", err)
+		return response.Error(http.StatusBadRequest, "failed to start export job", err)
+	}
+
+	ex.exportJob = job
+	return response.JSON(http.StatusOK, ex.exportJob.getStatus())
+}
+
+func (ex *StandardExport) broadcastStatus(orgID int64, s ExportStatus) {
+	msg, err := json.Marshal(s)
+	if err != nil {
+		ex.logger.Warn("Error marshaling status message", "err", err)
+		return
+	}
+	err = ex.glive.Publish(orgID, "grafana/broadcast/export", msg)
+	if err != nil {
+		ex.logger.Warn("Error publishing status message", "err", err)
+		return
+	}
+}
19 pkg/services/export/stopped_job.go Normal file
@@ -0,0 +1,19 @@
+package export
+
+import "time"
+
+var _ Job = new(stoppedJob)
+
+type stoppedJob struct{}
+
+func (e *stoppedJob) getStatus() ExportStatus {
+	return ExportStatus{
+		Running: false,
+		Changed: time.Now().UnixMilli(),
+	}
+}
+
+func (e *stoppedJob) getConfig() ExportConfig {
+	return ExportConfig{}
+}
20 pkg/services/export/stub.go Normal file
@@ -0,0 +1,20 @@
+package export
+
+import (
+	"net/http"
+
+	"github.com/grafana/grafana/pkg/api/response"
+	"github.com/grafana/grafana/pkg/models"
+)
+
+var _ ExportService = new(StubExport)
+
+type StubExport struct{}
+
+func (ex *StubExport) HandleGetStatus(c *models.ReqContext) response.Response {
+	return response.Error(http.StatusForbidden, "feature not enabled", nil)
+}
+
+func (ex *StubExport) HandleRequestExport(c *models.ReqContext) response.Response {
+	return response.Error(http.StatusForbidden, "feature not enabled", nil)
+}
36 pkg/services/export/types.go Normal file
@@ -0,0 +1,36 @@
+package export
+
+// ExportStatus reports the state of the current export. Only one runs at a time.
+type ExportStatus struct {
+	Running  bool   `json:"running"`
+	Target   string `json:"target"` // description of where it is going (no secrets)
+	Started  int64  `json:"started,omitempty"`
+	Finished int64  `json:"finished,omitempty"`
+	Changed  int64  `json:"update,omitempty"`
+	Count    int64  `json:"count,omitempty"`
+	Current  int64  `json:"current,omitempty"`
+	Last     string `json:"last,omitempty"`
+	Status   string `json:"status"` // ERROR, SUCCESS, etc.
+}
+
+// ExportConfig is the basic export configuration (for now).
+type ExportConfig struct {
+	Format string          `json:"format"`
+	Git    GitExportConfig `json:"git"`
+}
+
+type GitExportConfig struct {
+	// General folder is either at the root or as a subfolder
+	GeneralAtRoot bool `json:"generalAtRoot"`
+
+	// Keeping all history is nice, but much slower
+	ExcludeHistory bool `json:"excludeHistory"`
+}
+
+type Job interface {
+	getStatus() ExportStatus
+	getConfig() ExportConfig
+}
+
+// statusBroadcaster broadcasts the live status.
+type statusBroadcaster func(s ExportStatus)
@@ -190,6 +190,12 @@ var (
 Description: "Provisioning-friendly routes for alerting",
 State: FeatureStateAlpha,
 },
+{
+Name: "export",
+Description: "Export grafana instance (to git, etc)",
+State: FeatureStateAlpha,
+RequiresDevMode: true,
+},
 {
 Name: "storageLocalUpload",
 Description: "allow uploads to local storage",
@@ -143,6 +143,10 @@ const (
 // Provisioning-friendly routes for alerting
 FlagAlertProvisioning = "alertProvisioning"
 
+// FlagExport
+// Export grafana instance (to git, etc)
+FlagExport = "export"
+
 // FlagStorageLocalUpload
 // allow uploads to local storage
 FlagStorageLocalUpload = "storageLocalUpload"
@@ -13,6 +13,7 @@ import (
 "github.com/grafana/grafana/pkg/services/accesscontrol"
 acMock "github.com/grafana/grafana/pkg/services/accesscontrol/mock"
 "github.com/grafana/grafana/pkg/services/datasources"
+fakes "github.com/grafana/grafana/pkg/services/datasources/fakes"
 "github.com/grafana/grafana/pkg/services/ngalert/api/tooling/definitions"
 "github.com/grafana/grafana/pkg/services/ngalert/eval"
 "github.com/grafana/grafana/pkg/services/ngalert/models"
@@ -61,7 +62,7 @@ func TestRouteTestGrafanaRuleConfig(t *testing.T) {
 {Action: datasources.ActionQuery, Scope: datasources.ScopeProvider.GetResourceScopeUID(data2.DatasourceUID)},
 })
 
-ds := &datasources.FakeCacheService{DataSources: []*models2.DataSource{
+ds := &fakes.FakeCacheService{DataSources: []*models2.DataSource{
 {Uid: data1.DatasourceUID},
 {Uid: data2.DatasourceUID},
 }}
@@ -102,7 +103,7 @@ func TestRouteTestGrafanaRuleConfig(t *testing.T) {
 t.Run("should require user to be signed in", func(t *testing.T) {
 data1 := models.GenerateAlertQuery()
 
-ds := &datasources.FakeCacheService{DataSources: []*models2.DataSource{
+ds := &fakes.FakeCacheService{DataSources: []*models2.DataSource{
 {Uid: data1.DatasourceUID},
 }}
 
@@ -182,7 +183,7 @@ func TestRouteEvalQueries(t *testing.T) {
 {Action: datasources.ActionQuery, Scope: datasources.ScopeProvider.GetResourceScopeUID(data2.DatasourceUID)},
 })
 
-ds := &datasources.FakeCacheService{DataSources: []*models2.DataSource{
+ds := &fakes.FakeCacheService{DataSources: []*models2.DataSource{
 {Uid: data1.DatasourceUID},
 {Uid: data2.DatasourceUID},
 }}
@@ -226,7 +227,7 @@ func TestRouteEvalQueries(t *testing.T) {
 t.Run("should require user to be signed in", func(t *testing.T) {
 data1 := models.GenerateAlertQuery()
 
-ds := &datasources.FakeCacheService{DataSources: []*models2.DataSource{
+ds := &fakes.FakeCacheService{DataSources: []*models2.DataSource{
 {Uid: data1.DatasourceUID},
 }}
 
@@ -265,7 +266,7 @@ func TestRouteEvalQueries(t *testing.T) {
 })
 }
 
-func createTestingApiSrv(ds *datasources.FakeCacheService, ac *acMock.Mock, evaluator *eval.FakeEvaluator) *TestingApiSrv {
+func createTestingApiSrv(ds *fakes.FakeCacheService, ac *acMock.Mock, evaluator *eval.FakeEvaluator) *TestingApiSrv {
 if ac == nil {
 ac = acMock.New().WithDisabled()
 }
@@ -36,7 +36,7 @@ func TestDashboardsAsConfig(t *testing.T) {
 
 for i := 1; i <= 2; i++ {
 orgCommand := models.CreateOrgCommand{Name: fmt.Sprintf("Main Org. %v", i)}
-err := sqlstore.CreateOrg(context.Background(), &orgCommand)
+err := store.CreateOrg(context.Background(), &orgCommand)
 require.NoError(t, err)
 }
 
@@ -39,7 +39,7 @@ func TestNotificationAsConfig(t *testing.T) {
 
 for i := 1; i < 5; i++ {
 orgCommand := models.CreateOrgCommand{Name: fmt.Sprintf("Main Org. %v", i)}
-err := sqlstore.CreateOrg(context.Background(), &orgCommand)
+err := sqlStore.CreateOrg(context.Background(), &orgCommand)
 require.NoError(t, err)
 }
 
@@ -15,7 +15,6 @@ import (
 "github.com/grafana/grafana/pkg/plugins/adapters"
 "github.com/grafana/grafana/pkg/services/datasources"
 "github.com/grafana/grafana/pkg/services/oauthtoken"
-"github.com/grafana/grafana/pkg/services/secrets"
 "github.com/grafana/grafana/pkg/setting"
 "github.com/grafana/grafana/pkg/tsdb/grafanads"
 "github.com/grafana/grafana/pkg/tsdb/legacydata"
@@ -33,7 +32,7 @@ func ProvideService(
 dataSourceCache datasources.CacheService,
 expressionService *expr.Service,
 pluginRequestValidator models.PluginRequestValidator,
-SecretsService secrets.Service,
+dataSourceService datasources.DataSourceService,
 pluginClient plugins.Client,
 oAuthTokenService oauthtoken.OAuthTokenService,
 ) *Service {
@@ -42,7 +41,7 @@ func ProvideService(
 dataSourceCache: dataSourceCache,
 expressionService: expressionService,
 pluginRequestValidator: pluginRequestValidator,
-secretsService: SecretsService,
+dataSourceService: dataSourceService,
 pluginClient: pluginClient,
 oAuthTokenService: oAuthTokenService,
 log: log.New("query_data"),
@@ -56,7 +55,7 @@ type Service struct {
 dataSourceCache datasources.CacheService
 expressionService *expr.Service
 pluginRequestValidator models.PluginRequestValidator
-secretsService secrets.Service
+dataSourceService datasources.DataSourceService
 pluginClient plugins.Client
 oAuthTokenService oauthtoken.OAuthTokenService
 log log.Logger
@@ -291,9 +290,9 @@ func (s *Service) getDataSourceFromQuery(ctx context.Context, user *models.Signe
 return nil, NewErrBadQuery("missing data source ID/UID")
 }
 
-func (s *Service) decryptSecureJsonDataFn(ctx context.Context) func(map[string][]byte) map[string]string {
-return func(m map[string][]byte) map[string]string {
-decryptedJsonData, err := s.secretsService.DecryptJsonData(ctx, m)
+func (s *Service) decryptSecureJsonDataFn(ctx context.Context) func(ds *models.DataSource) map[string]string {
+return func(ds *models.DataSource) map[string]string {
+decryptedJsonData, err := s.dataSourceService.DecryptedValues(ctx, ds)
 if err != nil {
 s.log.Error("Failed to decrypt secure json data", "error", err)
 }
@@ -2,6 +2,7 @@ package query_test
 
 import (
 "context"
+"encoding/json"
 "net/http"
 "testing"
 
@@ -12,18 +13,29 @@ import (
 "github.com/grafana/grafana/pkg/components/simplejson"
 "github.com/grafana/grafana/pkg/models"
 "github.com/grafana/grafana/pkg/plugins"
+acmock "github.com/grafana/grafana/pkg/services/accesscontrol/mock"
+datasources "github.com/grafana/grafana/pkg/services/datasources/service"
+"github.com/grafana/grafana/pkg/services/featuremgmt"
 "github.com/grafana/grafana/pkg/services/query"
 "github.com/grafana/grafana/pkg/services/secrets"
+"github.com/grafana/grafana/pkg/services/secrets/fakes"
+"github.com/grafana/grafana/pkg/services/secrets/kvstore"
+secretsManager "github.com/grafana/grafana/pkg/services/secrets/manager"
 
 "github.com/stretchr/testify/require"
 )
 
 func TestQueryData(t *testing.T) {
 t.Run("it attaches custom headers to the request", func(t *testing.T) {
-tc := setup()
+tc := setup(t)
 tc.dataSourceCache.ds.JsonData = simplejson.NewFromAny(map[string]interface{}{"httpHeaderName1": "foo", "httpHeaderName2": "bar"})
-tc.secretService.decryptedJson = map[string]string{"httpHeaderValue1": "test-header", "httpHeaderValue2": "test-header2"}
 
-_, err := tc.queryService.QueryData(context.Background(), nil, true, metricRequest(), false)
+secureJsonData, err := json.Marshal(map[string]string{"httpHeaderValue1": "test-header", "httpHeaderValue2": "test-header2"})
+require.NoError(t, err)
+
+err = tc.secretStore.Set(context.Background(), tc.dataSourceCache.ds.OrgId, tc.dataSourceCache.ds.Name, "datasource", string(secureJsonData))
+require.NoError(t, err)
+
+_, err = tc.queryService.QueryData(context.Background(), nil, true, metricRequest(), false)
 require.Nil(t, err)
 
 require.Equal(t, map[string]string{"foo": "test-header", "bar": "test-header2"}, tc.pluginContext.req.Headers)
@@ -36,7 +48,7 @@ func TestQueryData(t *testing.T) {
 }
 token = token.WithExtra(map[string]interface{}{"id_token": "id-token"})
 
-tc := setup()
+tc := setup(t)
 tc.oauthTokenService.passThruEnabled = true
 tc.oauthTokenService.token = token
 
@@ -51,26 +63,29 @@ func TestQueryData(t *testing.T) {
 })
 }
 
-func setup() *testContext {
+func setup(t *testing.T) *testContext {
 pc := &fakePluginClient{}
-sc := &fakeSecretsService{}
 dc := &fakeDataSourceCache{ds: &models.DataSource{}}
 tc := &fakeOAuthTokenService{}
 rv := &fakePluginRequestValidator{}
 
+ss := kvstore.SetupTestService(t)
+ssvc := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
+ds := datasources.ProvideService(nil, ssvc, ss, nil, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
+
 return &testContext{
 pluginContext: pc,
-secretService: sc,
+secretStore: ss,
 dataSourceCache: dc,
 oauthTokenService: tc,
 pluginRequestValidator: rv,
-queryService: query.ProvideService(nil, dc, nil, rv, sc, pc, tc),
+queryService: query.ProvideService(nil, dc, nil, rv, ds, pc, tc),
 }
 }
 
 type testContext struct {
 pluginContext *fakePluginClient
-secretService *fakeSecretsService
+secretStore kvstore.SecretsKVStore
 dataSourceCache *fakeDataSourceCache
 oauthTokenService *fakeOAuthTokenService
 pluginRequestValidator *fakePluginRequestValidator
@@ -108,16 +123,6 @@ func (ts *fakeOAuthTokenService) IsOAuthPassThruEnabled(*models.DataSource) bool
 return ts.passThruEnabled
 }
 
-type fakeSecretsService struct {
-secrets.Service
-
-decryptedJson map[string]string
-}
-
-func (s *fakeSecretsService) DecryptJsonData(ctx context.Context, sjd map[string][]byte) (map[string]string, error) {
-return s.decryptedJson, nil
-}
-
 type fakeDataSourceCache struct {
 ds *models.DataSource
 }
29 pkg/services/secrets/kvstore/helpers.go Normal file
@@ -0,0 +1,29 @@
+package kvstore
+
+import (
+	"testing"
+
+	"github.com/grafana/grafana/pkg/infra/log"
+	"github.com/grafana/grafana/pkg/services/secrets/database"
+	"github.com/grafana/grafana/pkg/services/secrets/manager"
+	"github.com/grafana/grafana/pkg/services/sqlstore"
+)
+
+func SetupTestService(t *testing.T) SecretsKVStore {
+	t.Helper()
+
+	sqlStore := sqlstore.InitTestDB(t)
+	store := database.ProvideSecretsStore(sqlStore)
+	secretsService := manager.SetupTestService(t, store)
+
+	kv := &secretsKVStoreSQL{
+		sqlStore:       sqlStore,
+		log:            log.New("secrets.kvstore"),
+		secretsService: secretsService,
+		decryptionCache: decryptionCache{
+			cache: make(map[int64]cachedDecrypted),
+		},
+	}
+
+	return kv
+}
77 pkg/services/secrets/kvstore/kvstore.go Normal file
@@ -0,0 +1,77 @@
+package kvstore
+
+import (
+	"context"
+
+	"github.com/grafana/grafana/pkg/infra/log"
+	"github.com/grafana/grafana/pkg/services/secrets"
+	"github.com/grafana/grafana/pkg/services/sqlstore"
+)
+
+const (
+	// Wildcard to query all organizations
+	AllOrganizations = -1
+)
+
+func ProvideService(sqlStore sqlstore.Store, secretsService secrets.Service) SecretsKVStore {
+	return &secretsKVStoreSQL{
+		sqlStore:       sqlStore,
+		secretsService: secretsService,
+		log:            log.New("secrets.kvstore"),
+		decryptionCache: decryptionCache{
+			cache: make(map[int64]cachedDecrypted),
+		},
+	}
+}
+
+// SecretsKVStore is an interface for k/v store.
+type SecretsKVStore interface {
+	Get(ctx context.Context, orgId int64, namespace string, typ string) (string, bool, error)
+	Set(ctx context.Context, orgId int64, namespace string, typ string, value string) error
+	Del(ctx context.Context, orgId int64, namespace string, typ string) error
+	Keys(ctx context.Context, orgId int64, namespace string, typ string) ([]Key, error)
+	Rename(ctx context.Context, orgId int64, namespace string, typ string, newNamespace string) error
+}
+
+// With returns a kvstore wrapper with fixed orgId, namespace and type.
+func With(kv SecretsKVStore, orgId int64, namespace string, typ string) *FixedKVStore {
+	return &FixedKVStore{
+		kvStore:   kv,
+		OrgId:     orgId,
+		Namespace: namespace,
+		Type:      typ,
+	}
+}
+
+// FixedKVStore is a SecretsKVStore wrapper with fixed orgId, namespace and type.
+type FixedKVStore struct {
+	kvStore   SecretsKVStore
+	OrgId     int64
+	Namespace string
+	Type      string
+}
+
+func (kv *FixedKVStore) Get(ctx context.Context) (string, bool, error) {
+	return kv.kvStore.Get(ctx, kv.OrgId, kv.Namespace, kv.Type)
+}
+
+func (kv *FixedKVStore) Set(ctx context.Context, value string) error {
+	return kv.kvStore.Set(ctx, kv.OrgId, kv.Namespace, kv.Type, value)
+}
+
+func (kv *FixedKVStore) Del(ctx context.Context) error {
+	return kv.kvStore.Del(ctx, kv.OrgId, kv.Namespace, kv.Type)
+}
+
+func (kv *FixedKVStore) Keys(ctx context.Context) ([]Key, error) {
+	return kv.kvStore.Keys(ctx, kv.OrgId, kv.Namespace, kv.Type)
+}
+
+func (kv *FixedKVStore) Rename(ctx context.Context, newNamespace string) error {
+	err := kv.kvStore.Rename(ctx, kv.OrgId, kv.Namespace, kv.Type, newNamespace)
+	if err != nil {
+		return err
+	}
+	kv.Namespace = newNamespace
+	return nil
+}
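The `With`/`FixedKVStore` wrapper above is a small scoping pattern: pin the (orgId, namespace, type) triple once so call sites cannot mix scopes, and expose a narrower API. The sketch below shows the same idea against an in-memory map; the names `KV`, `Fixed`, and `memStore` are illustrative stand-ins, not Grafana's types.

```go
package main

import "fmt"

// KV is a trimmed-down stand-in for a scoped key/value store interface.
type KV interface {
	Get(orgID int64, namespace, typ string) (string, bool)
	Set(orgID int64, namespace, typ, value string)
}

// memStore keys values by the full (org, namespace, type) triple.
type memStore struct{ m map[string]string }

func newMemStore() *memStore { return &memStore{m: map[string]string{}} }

func key(orgID int64, ns, typ string) string {
	return fmt.Sprintf("%d/%s/%s", orgID, ns, typ)
}

func (s *memStore) Get(orgID int64, ns, typ string) (string, bool) {
	v, ok := s.m[key(orgID, ns, typ)]
	return v, ok
}

func (s *memStore) Set(orgID int64, ns, typ, value string) {
	s.m[key(orgID, ns, typ)] = value
}

// Fixed pins the scope once; its methods drop the scope parameters entirely.
type Fixed struct {
	kv        KV
	OrgID     int64
	Namespace string
	Type      string
}

func With(kv KV, orgID int64, ns, typ string) *Fixed {
	return &Fixed{kv: kv, OrgID: orgID, Namespace: ns, Type: typ}
}

func (f *Fixed) Get() (string, bool) { return f.kv.Get(f.OrgID, f.Namespace, f.Type) }
func (f *Fixed) Set(v string)        { f.kv.Set(f.OrgID, f.Namespace, f.Type, v) }

func main() {
	store := newMemStore()
	c := With(store, 1, "my-datasource", "datasource")
	c.Set("encrypted-payload")
	v, ok := c.Get()
	fmt.Println(v, ok) // prints: encrypted-payload true
}
```

Because the wrapper holds the scope, code handling one data source's secret cannot accidentally read or overwrite another org's entry.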
226 pkg/services/secrets/kvstore/kvstore_test.go Normal file
@@ -0,0 +1,226 @@
+package kvstore
+
+import (
+	"context"
+	"fmt"
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+type TestCase struct {
+	OrgId     int64
+	Namespace string
+	Type      string
+	Revision  int64
+}
+
+func (t *TestCase) Value() string {
+	return fmt.Sprintf("%d:%s:%s:%d", t.OrgId, t.Namespace, t.Type, t.Revision)
+}
+
+func TestKVStore(t *testing.T) {
+	kv := SetupTestService(t)
+
+	ctx := context.Background()
+
+	testCases := []*TestCase{
+		{
+			OrgId:     0,
+			Namespace: "namespace1",
+			Type:      "testing1",
+		},
+		{
+			OrgId:     0,
+			Namespace: "namespace2",
+			Type:      "testing2",
+		},
+		{
+			OrgId:     1,
+			Namespace: "namespace1",
+			Type:      "testing1",
+		},
+		{
+			OrgId:     1,
+			Namespace: "namespace3",
+			Type:      "testing3",
+		},
+	}
+
+	for _, tc := range testCases {
+		err := kv.Set(ctx, tc.OrgId, tc.Namespace, tc.Type, tc.Value())
+		require.NoError(t, err)
+	}
+
+	t.Run("get existing keys", func(t *testing.T) {
+		for _, tc := range testCases {
+			value, ok, err := kv.Get(ctx, tc.OrgId, tc.Namespace, tc.Type)
+			require.NoError(t, err)
+			require.True(t, ok)
+			require.Equal(t, tc.Value(), value)
+		}
+	})
+
+	t.Run("get nonexistent keys", func(t *testing.T) {
+		tcs := []*TestCase{
+			{
+				OrgId:     0,
+				Namespace: "namespace3",
+				Type:      "testing3",
+			},
+			{
+				OrgId:     1,
+				Namespace: "namespace2",
+				Type:      "testing2",
+			},
+			{
+				OrgId:     2,
+				Namespace: "namespace1",
+				Type:      "testing1",
+			},
+		}
+
+		for _, tc := range tcs {
+			value, ok, err := kv.Get(ctx, tc.OrgId, tc.Namespace, tc.Type)
+			require.Nil(t, err)
+			require.False(t, ok)
+			require.Equal(t, "", value)
+		}
+	})
+
+	t.Run("modify existing key", func(t *testing.T) {
+		tc := testCases[0]
+
+		value, ok, err := kv.Get(ctx, tc.OrgId, tc.Namespace, tc.Type)
+		require.NoError(t, err)
+		require.True(t, ok)
+		assert.Equal(t, tc.Value(), value)
+
+		tc.Revision += 1
+
+		err = kv.Set(ctx, tc.OrgId, tc.Namespace, tc.Type, tc.Value())
+		require.NoError(t, err)
+
+		value, ok, err = kv.Get(ctx, tc.OrgId, tc.Namespace, tc.Type)
+		require.NoError(t, err)
+		require.True(t, ok)
+		assert.Equal(t, tc.Value(), value)
+	})
+
+	t.Run("use fixed client", func(t *testing.T) {
+		tc := testCases[0]
+
+		client := With(kv, tc.OrgId, tc.Namespace, tc.Type)
+		fmt.Println(client.Namespace, client.OrgId, client.Type)
+
+		value, ok, err := client.Get(ctx)
+		require.NoError(t, err)
+		require.True(t, ok)
+		require.Equal(t, tc.Value(), value)
+
+		tc.Revision += 1
+
+		err = client.Set(ctx, tc.Value())
+		require.NoError(t, err)
+
+		value, ok, err = client.Get(ctx)
+		require.NoError(t, err)
+		require.True(t, ok)
+		assert.Equal(t, tc.Value(), value)
+	})
+
+	t.Run("deleting keys", func(t *testing.T) {
+		var stillHasKeys bool
+		for _, tc := range testCases {
+			if _, ok, err := kv.Get(ctx, tc.OrgId, tc.Namespace, tc.Type); err == nil && ok {
+				stillHasKeys = true
+				break
+			}
+		}
+		require.True(t, stillHasKeys,
+			"we are going to test key deletion, but there are no keys to delete in the database")
+		for _, tc := range testCases {
+			err := kv.Del(ctx, tc.OrgId, tc.Namespace, tc.Type)
+			require.NoError(t, err)
+		}
+		for _, tc := range testCases {
+			_, ok, err := kv.Get(ctx, tc.OrgId, tc.Namespace, tc.Type)
+			require.NoError(t, err)
+			require.False(t, ok, "all keys should be deleted at this point")
+		}
+	})
+
+	t.Run("listing existing keys", func(t *testing.T) {
+		kv := SetupTestService(t)
+
+		ctx := context.Background()
+
+		namespace, typ := "listtest", "listtest"
+
+		testCases := []*TestCase{
+			{
+				OrgId:     1,
+				Type:      typ,
+				Namespace: namespace,
+			},
+			{
+				OrgId:     2,
+				Type:      typ,
+				Namespace: namespace,
+			},
+			{
+				OrgId:     3,
+				Type:      typ,
+				Namespace: namespace,
+			},
+			{
+				OrgId:     4,
+				Type:      typ,
+				Namespace: namespace,
+			},
+			{
+				OrgId:     1,
+				Type:      typ,
+				Namespace: "other_key",
+			},
+			{
+				OrgId:     4,
+				Type:      typ,
+				Namespace: "another_one",
+			},
+		}
+
+		for _, tc := range testCases {
+			err := kv.Set(ctx, tc.OrgId, tc.Namespace, tc.Type, tc.Value())
+			require.NoError(t, err)
+		}
+
+		keys, err := kv.Keys(ctx, AllOrganizations, namespace, typ)
+
+		require.NoError(t, err)
+		require.Len(t, keys, 4)
+
+		found := 0
+
+		for _, key := range keys {
+			for _, tc := range testCases {
+				if key.OrgId == tc.OrgId && key.Namespace == tc.Namespace && key.Type == tc.Type {
+					found++
+					break
+				}
+			}
+		}
+
+		require.Equal(t, 4, found, "querying for all orgs should return 4 records")
+
+		keys, err = kv.Keys(ctx, 1, namespace, typ)
+
+		require.NoError(t, err)
+		require.Len(t, keys, 1, "querying for a specific org should return 1 record")
+
+		keys, err = kv.Keys(ctx, AllOrganizations, "not_existing_namespace", "not_existing_type")
+		require.NoError(t, err, "querying a not existing namespace should not throw an error")
+		require.Len(t, keys, 0, "querying a not existing namespace should return an empty slice")
+	})
}
|
31 pkg/services/secrets/kvstore/model.go Normal file
@ -0,0 +1,31 @@
package kvstore

import (
	"time"
)

// Item stored in k/v store.
type Item struct {
	Id        int64
	OrgId     *int64
	Namespace *string
	Type      *string
	Value     string

	Created time.Time
	Updated time.Time
}

func (i *Item) TableName() string {
	return "secrets"
}

type Key struct {
	OrgId     int64
	Namespace string
	Type      string
}

func (i *Key) TableName() string {
	return "secrets"
}
220 pkg/services/secrets/kvstore/sql.go Normal file
@ -0,0 +1,220 @@
package kvstore

import (
	"context"
	"encoding/base64"
	"sync"
	"time"

	"github.com/grafana/grafana/pkg/infra/log"
	"github.com/grafana/grafana/pkg/services/secrets"
	"github.com/grafana/grafana/pkg/services/sqlstore"
)

// secretsKVStoreSQL provides a key/value store backed by the Grafana database
type secretsKVStoreSQL struct {
	log             log.Logger
	sqlStore        sqlstore.Store
	secretsService  secrets.Service
	decryptionCache decryptionCache
}

type decryptionCache struct {
	cache map[int64]cachedDecrypted
	sync.Mutex
}

type cachedDecrypted struct {
	updated time.Time
	value   string
}

var b64 = base64.RawStdEncoding

// Get an item from the store
func (kv *secretsKVStoreSQL) Get(ctx context.Context, orgId int64, namespace string, typ string) (string, bool, error) {
	item := Item{
		OrgId:     &orgId,
		Namespace: &namespace,
		Type:      &typ,
	}
	var isFound bool
	var decryptedValue []byte

	err := kv.sqlStore.WithDbSession(ctx, func(dbSession *sqlstore.DBSession) error {
		has, err := dbSession.Get(&item)
		if err != nil {
			kv.log.Debug("error getting secret value", "orgId", orgId, "type", typ, "namespace", namespace, "err", err)
			return err
		}
		if !has {
			kv.log.Debug("secret value not found", "orgId", orgId, "type", typ, "namespace", namespace)
			return nil
		}
		isFound = true
		kv.log.Debug("got secret value", "orgId", orgId, "type", typ, "namespace", namespace)
		return nil
	})

	if err == nil && isFound {
		kv.decryptionCache.Lock()
		defer kv.decryptionCache.Unlock()

		if cache, present := kv.decryptionCache.cache[item.Id]; present && item.Updated.Equal(cache.updated) {
			return cache.value, isFound, err
		}

		decodedValue, err := b64.DecodeString(item.Value)
		if err != nil {
			kv.log.Debug("error decoding secret value", "orgId", orgId, "type", typ, "namespace", namespace, "err", err)
			return string(decryptedValue), isFound, err
		}

		decryptedValue, err = kv.secretsService.Decrypt(ctx, decodedValue)
		if err != nil {
			kv.log.Debug("error decrypting secret value", "orgId", orgId, "type", typ, "namespace", namespace, "err", err)
			return string(decryptedValue), isFound, err
		}

		kv.decryptionCache.cache[item.Id] = cachedDecrypted{
			updated: item.Updated,
			value:   string(decryptedValue),
		}
	}

	return string(decryptedValue), isFound, err
}

// Set an item in the store
func (kv *secretsKVStoreSQL) Set(ctx context.Context, orgId int64, namespace string, typ string, value string) error {
	encryptedValue, err := kv.secretsService.Encrypt(ctx, []byte(value), secrets.WithoutScope())
	if err != nil {
		kv.log.Debug("error encrypting secret value", "orgId", orgId, "type", typ, "namespace", namespace, "err", err)
		return err
	}
	encodedValue := b64.EncodeToString(encryptedValue)
	return kv.sqlStore.WithTransactionalDbSession(ctx, func(dbSession *sqlstore.DBSession) error {
		item := Item{
			OrgId:     &orgId,
			Namespace: &namespace,
			Type:      &typ,
		}

		has, err := dbSession.Get(&item)
		if err != nil {
			kv.log.Debug("error checking secret value", "orgId", orgId, "type", typ, "namespace", namespace, "err", err)
			return err
		}

		if has && item.Value == encodedValue {
			kv.log.Debug("secret value not changed", "orgId", orgId, "type", typ, "namespace", namespace)
			return nil
		}

		item.Value = encodedValue
		item.Updated = time.Now()

		if has {
			// if item already exists we update it
			_, err = dbSession.ID(item.Id).Update(&item)
			if err != nil {
				kv.log.Debug("error updating secret value", "orgId", orgId, "type", typ, "namespace", namespace, "err", err)
			} else {
				kv.decryptionCache.cache[item.Id] = cachedDecrypted{
					updated: item.Updated,
					value:   value,
				}
				kv.log.Debug("secret value updated", "orgId", orgId, "type", typ, "namespace", namespace)
			}
			return err
		}

		// if item doesn't exist we create it
		item.Created = item.Updated
		_, err = dbSession.Insert(&item)
		if err != nil {
			kv.log.Debug("error inserting secret value", "orgId", orgId, "type", typ, "namespace", namespace, "err", err)
		} else {
			kv.log.Debug("secret value inserted", "orgId", orgId, "type", typ, "namespace", namespace)
		}
		return err
	})
}

// Del deletes an item from the store.
func (kv *secretsKVStoreSQL) Del(ctx context.Context, orgId int64, namespace string, typ string) error {
	err := kv.sqlStore.WithDbSession(ctx, func(dbSession *sqlstore.DBSession) error {
		item := Item{
			OrgId:     &orgId,
			Namespace: &namespace,
			Type:      &typ,
		}

		has, err := dbSession.Get(&item)
		if err != nil {
			kv.log.Debug("error checking secret value", "orgId", orgId, "type", typ, "namespace", namespace, "err", err)
			return err
		}

		if has {
			// if item exists we delete it
			_, err = dbSession.ID(item.Id).Delete(&item)
			if err != nil {
				kv.log.Debug("error deleting secret value", "orgId", orgId, "type", typ, "namespace", namespace, "err", err)
			} else {
				delete(kv.decryptionCache.cache, item.Id)
				kv.log.Debug("secret value deleted", "orgId", orgId, "type", typ, "namespace", namespace)
			}
			return err
		}
		return nil
	})
	return err
}

// Keys get all keys for a given namespace. To query for all
// organizations the constant 'kvstore.AllOrganizations' can be passed as orgId.
func (kv *secretsKVStoreSQL) Keys(ctx context.Context, orgId int64, namespace string, typ string) ([]Key, error) {
	var keys []Key
	err := kv.sqlStore.WithDbSession(ctx, func(dbSession *sqlstore.DBSession) error {
		query := dbSession.Where("namespace = ?", namespace).And("type = ?", typ)
		if orgId != AllOrganizations {
			query.And("org_id = ?", orgId)
		}
		return query.Find(&keys)
	})
	return keys, err
}

// Rename an item in the store
func (kv *secretsKVStoreSQL) Rename(ctx context.Context, orgId int64, namespace string, typ string, newNamespace string) error {
	return kv.sqlStore.WithTransactionalDbSession(ctx, func(dbSession *sqlstore.DBSession) error {
		item := Item{
			OrgId:     &orgId,
			Namespace: &namespace,
			Type:      &typ,
		}

		has, err := dbSession.Get(&item)
		if err != nil {
			kv.log.Debug("error checking secret value", "orgId", orgId, "type", typ, "namespace", namespace, "err", err)
			return err
		}

		item.Namespace = &newNamespace
		item.Updated = time.Now()

		if has {
			// if item already exists we update it
			_, err = dbSession.ID(item.Id).Update(&item)
			if err != nil {
				kv.log.Debug("error updating secret namespace", "orgId", orgId, "type", typ, "namespace", namespace, "err", err)
			} else {
				kv.log.Debug("secret namespace updated", "orgId", orgId, "type", typ, "namespace", namespace)
			}
			return err
		}

		return err
	})
}
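The store above persists (org_id, namespace, type) → encrypted value in SQL. As a minimal, self-contained sketch of the same Get/Set/Del semantics — an in-memory map keyed the same way, with base64 standing in for the real encrypt/decrypt round trip; all names here (`memStore`, the sample namespace) are illustrative, not Grafana's API:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// key mirrors the (org_id, namespace, type) unique index on the secrets table.
type key struct {
	OrgID     int64
	Namespace string
	Type      string
}

// memStore is an illustrative in-memory stand-in for the SQL-backed store;
// base64 stands in for the secrets service's encrypt/decrypt round trip.
type memStore struct {
	items map[key]string
}

func newMemStore() *memStore { return &memStore{items: map[key]string{}} }

func (s *memStore) Set(orgID int64, ns, typ, value string) {
	s.items[key{orgID, ns, typ}] = base64.RawStdEncoding.EncodeToString([]byte(value))
}

func (s *memStore) Get(orgID int64, ns, typ string) (string, bool, error) {
	enc, ok := s.items[key{orgID, ns, typ}]
	if !ok {
		return "", false, nil // "not found" is not an error, matching the store above
	}
	dec, err := base64.RawStdEncoding.DecodeString(enc)
	return string(dec), true, err
}

func (s *memStore) Del(orgID int64, ns, typ string) {
	delete(s.items, key{orgID, ns, typ})
}

func main() {
	s := newMemStore()
	s.Set(1, "datasource", "secureJsonData", "hunter2")
	v, ok, _ := s.Get(1, "datasource", "secureJsonData")
	fmt.Println(v, ok)
	s.Del(1, "datasource", "secureJsonData")
	_, ok, _ = s.Get(1, "datasource", "secureJsonData")
	fmt.Println(ok)
}
```

Like the real store, a miss returns `ok == false` with a nil error rather than an error value.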
@ -42,7 +42,7 @@ func TestServiceAccountsAPI_CreateServiceAccount(t *testing.T) {
	}()

	orgCmd := &models.CreateOrgCommand{Name: "Some Test Org"}
	err := sqlstore.CreateOrg(context.Background(), orgCmd)
	err := store.CreateOrg(context.Background(), orgCmd)
	require.Nil(t, err)

	type testCreateSATestCase struct {
@ -479,7 +479,7 @@ func (ss *SQLStore) UpdateAlertNotificationWithUid(ctx context.Context, cmd *mod
}

func (ss *SQLStore) SetAlertNotificationStateToCompleteCommand(ctx context.Context, cmd *models.SetAlertNotificationStateToCompleteCommand) error {
	return inTransactionCtx(ctx, func(sess *DBSession) error {
	return ss.WithTransactionalDbSession(ctx, func(sess *DBSession) error {
		version := cmd.Version
		var current models.AlertNotificationState
		if _, err := sess.ID(cmd.Id).Get(&current); err != nil {
@ -544,7 +544,7 @@ func (ss *SQLStore) SetAlertNotificationStateToPendingCommand(ctx context.Contex
}

func (ss *SQLStore) GetOrCreateAlertNotificationState(ctx context.Context, cmd *models.GetOrCreateNotificationStateQuery) error {
	return inTransactionCtx(ctx, func(sess *DBSession) error {
	return ss.WithTransactionalDbSession(ctx, func(sess *DBSession) error {
		nj := &models.AlertNotificationState{}

		exist, err := getAlertNotificationState(ctx, sess, cmd, nj)
@ -42,7 +42,7 @@ func NewSQLAnnotationRepo(sql *SQLStore) SQLAnnotationRepo {
}

func (r *SQLAnnotationRepo) Save(item *annotations.Item) error {
	return inTransaction(func(sess *DBSession) error {
	return r.sql.WithTransactionalDbSession(context.Background(), func(sess *DBSession) error {
		tags := models.ParseTagPairs(item.Tags)
		item.Tags = models.JoinTagPairs(tags)
		item.Created = timeNow().UnixNano() / int64(time.Millisecond)
@ -13,6 +13,7 @@ import (
type AnnotationCleanupService struct {
	batchSize int64
	log       log.Logger
	sqlstore  *SQLStore
}

const (
@ -92,7 +93,7 @@ func (acs *AnnotationCleanupService) executeUntilDoneOrCancelled(ctx context.Con
			return totalAffected, ctx.Err()
		default:
			var affected int64
			err := withDbSession(ctx, x, func(session *DBSession) error {
			err := withDbSession(ctx, acs.sqlstore.engine, func(session *DBSession) error {
				res, err := session.Exec(sql)
				if err != nil {
					return err
@ -87,7 +87,7 @@ func TestAnnotationCleanUp(t *testing.T) {

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			cleaner := &AnnotationCleanupService{batchSize: 1, log: log.New("test-logger")}
			cleaner := &AnnotationCleanupService{batchSize: 1, log: log.New("test-logger"), sqlstore: fakeSQL}
			affectedAnnotations, affectedAnnotationTags, err := cleaner.CleanAnnotations(context.Background(), test.cfg)
			require.NoError(t, err)

@ -142,7 +142,7 @@ func TestOldAnnotationsAreDeletedFirst(t *testing.T) {
	require.NoError(t, err, "cannot insert annotation")

	// run the clean up task to keep one annotation.
	cleaner := &AnnotationCleanupService{batchSize: 1, log: log.New("test-logger")}
	cleaner := &AnnotationCleanupService{batchSize: 1, log: log.New("test-logger"), sqlstore: fakeSQL}
	_, err = cleaner.cleanAnnotations(context.Background(), setting.AnnotationCleanupSettings{MaxCount: 1}, alertAnnotationType)
	require.NoError(t, err)
@ -18,4 +18,24 @@ func addSecretsMigration(mg *migrator.Migrator) {
	}

	mg.AddMigration("create data_keys table", migrator.NewAddTableMigration(dataKeysV1))

	secretsV1 := migrator.Table{
		Name: "secrets",
		Columns: []*migrator.Column{
			{Name: "id", Type: migrator.DB_BigInt, IsPrimaryKey: true, IsAutoIncrement: true},
			{Name: "org_id", Type: migrator.DB_BigInt, Nullable: false},
			{Name: "namespace", Type: migrator.DB_NVarchar, Length: 255, Nullable: false},
			{Name: "type", Type: migrator.DB_NVarchar, Length: 255, Nullable: false},
			{Name: "value", Type: migrator.DB_Text, Nullable: true},
			{Name: "created", Type: migrator.DB_DateTime, Nullable: false},
			{Name: "updated", Type: migrator.DB_DateTime, Nullable: false},
		},
		Indices: []*migrator.Index{
			{Cols: []string{"org_id"}},
			{Cols: []string{"org_id", "namespace"}},
			{Cols: []string{"org_id", "namespace", "type"}, Type: migrator.UniqueIndex},
		},
	}

	mg.AddMigration("create secrets table", migrator.NewAddTableMigration(secretsV1))
}
@ -117,6 +117,9 @@ func (m *SQLStoreMock) GetOrgByNameHandler(ctx context.Context, query *models.Ge
func (m *SQLStoreMock) CreateOrgWithMember(name string, userID int64) (models.Org, error) {
	return *m.ExpectedOrg, nil
}
func (m *SQLStoreMock) CreateOrg(ctx context.Context, cmd *models.CreateOrgCommand) error {
	return m.ExpectedError
}

func (m *SQLStoreMock) UpdateOrg(ctx context.Context, cmd *models.UpdateOrgCommand) error {
	return m.ExpectedError
@ -149,8 +149,8 @@ func (ss *SQLStore) CreateOrgWithMember(name string, userID int64) (models.Org,
	return createOrg(name, userID, ss.engine)
}

func CreateOrg(ctx context.Context, cmd *models.CreateOrgCommand) error {
	org, err := createOrg(cmd.Name, cmd.UserId, x)
func (ss *SQLStore) CreateOrg(ctx context.Context, cmd *models.CreateOrgCommand) error {
	org, err := createOrg(cmd.Name, cmd.UserId, ss.engine)
	if err != nil {
		return err
	}
@ -27,7 +27,7 @@ func TestAccountDataAccess(t *testing.T) {

	for i := 1; i < 4; i++ {
		cmd = &models.CreateOrgCommand{Name: fmt.Sprint("Org #", i)}
		err = CreateOrg(context.Background(), cmd)
		err = sqlStore.CreateOrg(context.Background(), cmd)
		require.NoError(t, err)

		ids = append(ids, cmd.Result.Id)
@ -44,7 +44,7 @@ func TestAccountDataAccess(t *testing.T) {
	sqlStore = InitTestDB(t)
	for i := 1; i < 4; i++ {
		cmd := &models.CreateOrgCommand{Name: fmt.Sprint("Org #", i)}
		err := CreateOrg(context.Background(), cmd)
		err := sqlStore.CreateOrg(context.Background(), cmd)
		require.NoError(t, err)
	}
@ -49,7 +49,7 @@ func TestQuotaCommandsAndQueries(t *testing.T) {
		UserId: 1,
	}

	err := CreateOrg(context.Background(), &userCmd)
	err := sqlStore.CreateOrg(context.Background(), &userCmd)
	require.NoError(t, err)
	orgId = userCmd.Result.Id
@ -32,7 +32,6 @@ import (
)

var (
	x       *xorm.Engine
	dialect migrator.Dialect

	sqlog log.Logger = log.New("sqlstore")
@ -101,13 +100,11 @@ func newSQLStore(cfg *setting.Cfg, cacheService *localcache.CacheService, engine

	ss.Dialect = migrator.NewDialect(ss.engine)

	// temporarily still set global var
	x = ss.engine
	dialect = ss.Dialect

	// Init repo instances
	annotations.SetRepository(&SQLAnnotationRepo{sql: ss})
	annotations.SetAnnotationCleaner(&AnnotationCleanupService{batchSize: ss.Cfg.AnnotationCleanupJobBatchSize, log: log.New("annotationcleaner")})
	annotations.SetAnnotationCleaner(&AnnotationCleanupService{batchSize: ss.Cfg.AnnotationCleanupJobBatchSize, log: log.New("annotationcleaner"), sqlstore: ss})

	// if err := ss.Reset(); err != nil {
	// 	return nil, err
@ -19,6 +19,7 @@ type Store interface {
	HasEditPermissionInFolders(ctx context.Context, query *models.HasEditPermissionInFoldersQuery) error
	SearchDashboardSnapshots(ctx context.Context, query *models.GetDashboardSnapshotsQuery) error
	GetOrgByName(name string) (*models.Org, error)
	CreateOrg(ctx context.Context, cmd *models.CreateOrgCommand) error
	CreateOrgWithMember(name string, userID int64) (models.Org, error)
	UpdateOrg(ctx context.Context, cmd *models.UpdateOrgCommand) error
	UpdateOrgAddress(ctx context.Context, cmd *models.UpdateOrgAddressCommand) error
@ -351,7 +351,7 @@ func getTeamMember(sess *DBSession, orgId int64, teamId int64, userId int64) (mo

// UpdateTeamMember updates a team member
func (ss *SQLStore) UpdateTeamMember(ctx context.Context, cmd *models.UpdateTeamMemberCommand) error {
	return inTransaction(func(sess *DBSession) error {
	return ss.WithTransactionalDbSession(ctx, func(sess *DBSession) error {
		return updateTeamMember(sess, cmd.OrgId, cmd.TeamId, cmd.UserId, cmd.Permission)
	})
}
@ -437,7 +437,7 @@ func updateTeamMember(sess *DBSession, orgID, teamID, userID int64, permission m

// RemoveTeamMember removes a member from a team
func (ss *SQLStore) RemoveTeamMember(ctx context.Context, cmd *models.RemoveTeamMemberCommand) error {
	return inTransaction(func(sess *DBSession) error {
	return ss.WithTransactionalDbSession(ctx, func(sess *DBSession) error {
		return removeTeamMember(sess, cmd)
	})
}
@ -32,8 +32,8 @@ func (ss *SQLStore) inTransactionWithRetry(ctx context.Context, fn func(ctx cont
	}, retry)
}

func inTransactionWithRetry(callback DBTransactionFunc, retry int) error {
	return inTransactionWithRetryCtx(context.Background(), x, callback, retry)
func inTransactionWithRetry(callback DBTransactionFunc, engine *xorm.Engine, retry int) error {
	return inTransactionWithRetryCtx(context.Background(), engine, callback, retry)
}

func inTransactionWithRetryCtx(ctx context.Context, engine *xorm.Engine, callback DBTransactionFunc, retry int) error {
@ -68,7 +68,7 @@ func inTransactionWithRetryCtx(ctx context.Context, engine *xorm.Engine, callbac

		time.Sleep(time.Millisecond * time.Duration(10))
		sqlog.Info("Database locked, sleeping then retrying", "error", err, "retry", retry)
		return inTransactionWithRetry(callback, retry+1)
		return inTransactionWithRetry(callback, engine, retry+1)
	}

	if err != nil {
@ -91,11 +91,3 @@ func inTransactionWithRetryCtx(ctx context.Context, engine *xorm.Engine, callbac

	return nil
}

func inTransaction(callback DBTransactionFunc) error {
	return inTransactionWithRetry(callback, 0)
}

func inTransactionCtx(ctx context.Context, callback DBTransactionFunc) error {
	return inTransactionWithRetryCtx(ctx, x, callback, 0)
}
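The refactor in this hunk threads an explicit `*xorm.Engine` through the retry helper instead of the package-global `x`. The retry-on-lock shape itself can be sketched generically; the sentinel error, function names, and retry cap below are illustrative, not Grafana's values:

```go
package main

import (
	"errors"
	"fmt"
)

var errDatabaseLocked = errors.New("database is locked")

// withRetry re-runs fn while it reports a lock conflict, up to maxRetries
// extra attempts, mirroring the shape of inTransactionWithRetry in the diff
// (minus the sleep between attempts).
func withRetry(fn func() error, maxRetries int) error {
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		err = fn()
		if !errors.Is(err, errDatabaseLocked) {
			return err // success, or a non-retryable error
		}
	}
	return err // retries exhausted, surface the lock error
}

func main() {
	calls := 0
	err := withRetry(func() error {
		calls++
		if calls < 3 {
			return errDatabaseLocked // first two attempts hit the lock
		}
		return nil
	}, 5)
	fmt.Println(calls, err)
}
```

Passing the engine (or here, the callback's captured state) explicitly makes the helper testable without a process-wide global, which is the point of removing `x`.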
@ -80,7 +80,7 @@ func TestUserDataAccess(t *testing.T) {
	}()

	orgCmd := &models.CreateOrgCommand{Name: "Some Test Org"}
	err := CreateOrg(context.Background(), orgCmd)
	err := ss.CreateOrg(context.Background(), orgCmd)
	require.Nil(t, err)

	cmd := models.CreateUserCommand{
@ -8,6 +8,7 @@ import (
	"net/http"
	"net/url"
	"path"
	"regexp"
	"sort"
	"strings"
	"time"
@ -32,7 +33,8 @@ type AzureMonitorDatasource struct {

var (
	// Used to convert the aggregation value to the Azure enum for deep linking
	aggregationTypeMap = map[string]int{"None": 0, "Total": 1, "Minimum": 2, "Maximum": 3, "Average": 4, "Count": 7}
	aggregationTypeMap   = map[string]int{"None": 0, "Total": 1, "Minimum": 2, "Maximum": 3, "Average": 4, "Count": 7}
	resourceNameLandmark = regexp.MustCompile(`(?i)(/(?P<resourceName>[\w-\.]+)/providers/Microsoft\.Insights/metrics)`)
)

const azureMonitorAPIVersion = "2018-01-01"
@ -74,12 +76,6 @@ func (e *AzureMonitorDatasource) buildQueries(queries []backend.DataQuery, dsInf

		azJSONModel := queryJSONModel.AzureMonitor

		urlComponents := map[string]string{}
		urlComponents["subscription"] = queryJSONModel.Subscription
		urlComponents["resourceGroup"] = azJSONModel.ResourceGroup
		urlComponents["metricDefinition"] = azJSONModel.MetricDefinition
		urlComponents["resourceName"] = azJSONModel.ResourceName

		ub := urlBuilder{
			ResourceURI: azJSONModel.ResourceURI,
			// Legacy, used to reconstruct resource URI if it's not present
@ -91,6 +87,19 @@ func (e *AzureMonitorDatasource) buildQueries(queries []backend.DataQuery, dsInf
		}
		azureURL := ub.BuildMetricsURL()

		resourceName := azJSONModel.ResourceName
		if resourceName == "" {
			resourceName = extractResourceNameFromMetricsURL(azureURL)
		}

		urlComponents := map[string]string{}
		urlComponents["resourceURI"] = azJSONModel.ResourceURI
		// Legacy fields used for constructing a deep link to display the query in Azure Portal.
		urlComponents["subscription"] = queryJSONModel.Subscription
		urlComponents["resourceGroup"] = azJSONModel.ResourceGroup
		urlComponents["metricDefinition"] = azJSONModel.MetricDefinition
		urlComponents["resourceName"] = resourceName

		alias := azJSONModel.Alias

		timeGrain := azJSONModel.TimeGrain
@ -338,12 +347,18 @@ func getQueryUrl(query *types.AzureMonitorQuery, azurePortalUrl string) (string,
	}
	escapedTime := url.QueryEscape(string(timespan))

	id := fmt.Sprintf("/subscriptions/%v/resourceGroups/%v/providers/%v/%v",
		query.UrlComponents["subscription"],
		query.UrlComponents["resourceGroup"],
		query.UrlComponents["metricDefinition"],
		query.UrlComponents["resourceName"],
	)
	id := query.UrlComponents["resourceURI"]

	if id == "" {
		ub := urlBuilder{
			Subscription:     query.UrlComponents["subscription"],
			ResourceGroup:    query.UrlComponents["resourceGroup"],
			MetricDefinition: query.UrlComponents["metricDefinition"],
			ResourceName:     query.UrlComponents["resourceName"],
		}
		id = ub.buildResourceURIFromLegacyQuery()
	}

	chartDef, err := json.Marshal(map[string]interface{}{
		"v2charts": []interface{}{
			map[string]interface{}{
@ -467,3 +482,20 @@ func toGrafanaUnit(unit string) string {
	// 1. Do not have a corresponding unit in Grafana's current list.
	// 2. Do not have the unit listed in any of Azure Monitor's supported metrics anyways.
}

func extractResourceNameFromMetricsURL(url string) string {
	matches := resourceNameLandmark.FindStringSubmatch(url)
	resourceName := ""

	if matches == nil {
		return resourceName
	}

	for i, name := range resourceNameLandmark.SubexpNames() {
		if name == "resourceName" {
			resourceName = matches[i]
		}
	}

	return resourceName
}
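The helper added in this file relies on a named capture group to pull the resource name out of a metrics URL. A self-contained sketch of the same extraction, with the regexp copied from the diff (the sample URL below is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same landmark pattern as the diff: the path segment immediately before
// "/providers/Microsoft.Insights/metrics" is the resource name; (?i) makes
// the match case-insensitive.
var resourceNameLandmark = regexp.MustCompile(`(?i)(/(?P<resourceName>[\w-\.]+)/providers/Microsoft\.Insights/metrics)`)

func extractResourceName(url string) string {
	matches := resourceNameLandmark.FindStringSubmatch(url)
	if matches == nil {
		return ""
	}
	// SubexpNames pairs each submatch index with its capture-group name.
	for i, name := range resourceNameLandmark.SubexpNames() {
		if name == "resourceName" {
			return matches[i]
		}
	}
	return ""
}

func main() {
	url := "/subscriptions/123/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/grafana/providers/microsoft.insights/metrics"
	fmt.Println(extractResourceName(url)) // lowercase "microsoft.insights" still matches thanks to (?i)
}
```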
@ -38,19 +38,22 @@ func TestAzureMonitorBuildQueries(t *testing.T) {
|
||||
azureMonitorVariedProperties map[string]interface{}
|
||||
azureMonitorQueryTarget string
|
||||
expectedInterval string
|
||||
resourceURI string
|
||||
queryInterval time.Duration
|
||||
}{
|
||||
{
|
||||
name: "Parse queries from frontend and build AzureMonitor API queries",
|
||||
azureMonitorVariedProperties: map[string]interface{}{
|
||||
"timeGrain": "PT1M",
|
||||
"top": "10",
|
||||
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
|
||||
"timeGrain": "PT1M",
|
||||
"top": "10",
|
||||
},
|
||||
resourceURI: "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
|
||||
expectedInterval: "PT1M",
|
||||
azureMonitorQueryTarget: "aggregation=Average&api-version=2018-01-01&interval=PT1M&metricnames=Percentage+CPU&metricnamespace=Microsoft.Compute-virtualMachines×pan=2018-03-15T13%3A00%3A00Z%2F2018-03-15T13%3A34%3A00Z",
|
||||
},
|
||||
{
|
||||
name: "time grain set to auto",
|
||||
name: "legacy query without resourceURI and time grain set to auto",
|
||||
azureMonitorVariedProperties: map[string]interface{}{
|
||||
"timeGrain": "auto",
|
||||
"top": "10",
|
||||
@ -60,7 +63,7 @@ func TestAzureMonitorBuildQueries(t *testing.T) {
|
||||
azureMonitorQueryTarget: "aggregation=Average&api-version=2018-01-01&interval=PT15M&metricnames=Percentage+CPU&metricnamespace=Microsoft.Compute-virtualMachines×pan=2018-03-15T13%3A00%3A00Z%2F2018-03-15T13%3A34%3A00Z",
|
||||
},
|
||||
{
|
||||
name: "time grain set to auto",
|
||||
name: "legacy query without resourceURI and time grain set to auto",
|
||||
azureMonitorVariedProperties: map[string]interface{}{
|
||||
"timeGrain": "auto",
|
||||
"allowedTimeGrainsMs": []int64{60000, 300000},
|
||||
@ -71,7 +74,7 @@ func TestAzureMonitorBuildQueries(t *testing.T) {
|
||||
azureMonitorQueryTarget: "aggregation=Average&api-version=2018-01-01&interval=PT5M&metricnames=Percentage+CPU&metricnamespace=Microsoft.Compute-virtualMachines×pan=2018-03-15T13%3A00%3A00Z%2F2018-03-15T13%3A34%3A00Z",
|
||||
},
|
||||
{
|
||||
name: "has a dimension filter",
|
||||
name: "legacy query without resourceURI and has a dimension filter",
|
||||
azureMonitorVariedProperties: map[string]interface{}{
|
||||
"timeGrain": "PT1M",
|
||||
"dimension": "blob",
|
||||
@ -83,7 +86,7 @@ func TestAzureMonitorBuildQueries(t *testing.T) {
|
||||
azureMonitorQueryTarget: "%24filter=blob+eq+%27%2A%27&aggregation=Average&api-version=2018-01-01&interval=PT1M&metricnames=Percentage+CPU&metricnamespace=Microsoft.Compute-virtualMachines×pan=2018-03-15T13%3A00%3A00Z%2F2018-03-15T13%3A34%3A00Z&top=30",
|
||||
},
|
||||
{
|
||||
name: "has a dimension filter and none Dimension",
|
||||
name: "legacy query without resourceURI and has a dimension filter and none Dimension",
|
||||
azureMonitorVariedProperties: map[string]interface{}{
|
||||
"timeGrain": "PT1M",
|
||||
"dimension": "None",
|
||||
@ -95,7 +98,7 @@ func TestAzureMonitorBuildQueries(t *testing.T) {
|
||||
azureMonitorQueryTarget: "aggregation=Average&api-version=2018-01-01&interval=PT1M&metricnames=Percentage+CPU&metricnamespace=Microsoft.Compute-virtualMachines×pan=2018-03-15T13%3A00%3A00Z%2F2018-03-15T13%3A34%3A00Z",
|
||||
},
|
||||
{
|
||||
name: "has dimensionFilter*s* property with one dimension",
|
||||
name: "legacy query without resourceURI and has dimensionFilter*s* property with one dimension",
|
||||
azureMonitorVariedProperties: map[string]interface{}{
|
||||
"timeGrain": "PT1M",
|
||||
"dimensionFilters": []types.AzureMonitorDimensionFilter{{Dimension: "blob", Operator: "eq", Filter: "*"}},
|
||||
@ -106,7 +109,7 @@ func TestAzureMonitorBuildQueries(t *testing.T) {
|
||||
azureMonitorQueryTarget: "%24filter=blob+eq+%27%2A%27&aggregation=Average&api-version=2018-01-01&interval=PT1M&metricnames=Percentage+CPU&metricnamespace=Microsoft.Compute-virtualMachines×pan=2018-03-15T13%3A00%3A00Z%2F2018-03-15T13%3A34%3A00Z&top=30",
|
||||
},
|
||||
{
|
||||
name: "has dimensionFilter*s* property with two dimensions",
|
||||
name: "legacy query without resourceURI and has dimensionFilter*s* property with two dimensions",
|
||||
azureMonitorVariedProperties: map[string]interface{}{
|
||||
"timeGrain": "PT1M",
|
||||
"dimensionFilters": []types.AzureMonitorDimensionFilter{{Dimension: "blob", Operator: "eq", Filter: "*"}, {Dimension: "tier", Operator: "eq", Filter: "*"}},
@ -117,7 +120,7 @@ func TestAzureMonitorBuildQueries(t *testing.T) {
azureMonitorQueryTarget: "%24filter=blob+eq+%27%2A%27+and+tier+eq+%27%2A%27&aggregation=Average&api-version=2018-01-01&interval=PT1M&metricnames=Percentage+CPU&metricnamespace=Microsoft.Compute-virtualMachines&timespan=2018-03-15T13%3A00%3A00Z%2F2018-03-15T13%3A34%3A00Z&top=30",
},
{
name: "has a dimension filter without specifying a top",
name: "legacy query without resourceURI and has a dimension filter without specifying a top",
azureMonitorVariedProperties: map[string]interface{}{
"timeGrain": "PT1M",
"dimension": "blob",
@ -165,6 +168,7 @@ func TestAzureMonitorBuildQueries(t *testing.T) {
azureMonitorQuery := &types.AzureMonitorQuery{
URL: "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana/providers/microsoft.insights/metrics",
UrlComponents: map[string]string{
"resourceURI": tt.resourceURI,
"metricDefinition": "Microsoft.Compute/virtualMachines",
"resourceGroup": "grafanastaging",
"resourceName": "grafana",
@ -214,19 +218,19 @@ func makeTestDataLink(url string) data.DataLink {
func TestAzureMonitorParseResponse(t *testing.T) {
// datalinks for the test frames
averageLink := makeTestDataLink(`http://ds/#blade/Microsoft_Azure_MonitoringMetrics/Metrics.ReactView/Referer/MetricsExplorer/TimeContext/%7B%22absolute%22%3A%7B%22startTime%22%3A%220001-01-01T00%3A00%3A00Z%22%2C%22endTime%22%3A%220001-01-01T00%3A00%3A00Z%22%7D%7D/` +
`ChartDefinition/%7B%22v2charts%22%3A%5B%7B%22metrics%22%3A%5B%7B%22resourceMetadata%22%3A%7B%22id%22%3A%22%2Fsubscriptions%2F%2FresourceGroups%2F%2Fproviders%2F%2Fgrafana%22%7D%2C%22name%22%3A%22%22%2C%22aggregationType%22%3A4%2C%22namespace%22%3A%22%22%2C` +
`ChartDefinition/%7B%22v2charts%22%3A%5B%7B%22metrics%22%3A%5B%7B%22resourceMetadata%22%3A%7B%22id%22%3A%22%2Fsubscriptions%2F12345678-aaaa-bbbb-cccc-123456789abc%2FresourceGroups%2Fgrafanastaging%2Fproviders%2FMicrosoft.Compute%2FvirtualMachines%2Fgrafana%22%7D%2C%22name%22%3A%22%22%2C%22aggregationType%22%3A4%2C%22namespace%22%3A%22%22%2C` +
`%22metricVisualization%22%3A%7B%22displayName%22%3A%22%22%2C%22resourceDisplayName%22%3A%22grafana%22%7D%7D%5D%7D%5D%7D`)
totalLink := makeTestDataLink(`http://ds/#blade/Microsoft_Azure_MonitoringMetrics/Metrics.ReactView/Referer/MetricsExplorer/TimeContext/%7B%22absolute%22%3A%7B%22startTime%22%3A%220001-01-01T00%3A00%3A00Z%22%2C%22endTime%22%3A%220001-01-01T00%3A00%3A00Z%22%7D%7D/` +
`ChartDefinition/%7B%22v2charts%22%3A%5B%7B%22metrics%22%3A%5B%7B%22resourceMetadata%22%3A%7B%22id%22%3A%22%2Fsubscriptions%2F%2FresourceGroups%2F%2Fproviders%2F%2Fgrafana%22%7D%2C%22name%22%3A%22%22%2C%22aggregationType%22%3A1%2C%22namespace%22%3A%22%22%2C` +
`ChartDefinition/%7B%22v2charts%22%3A%5B%7B%22metrics%22%3A%5B%7B%22resourceMetadata%22%3A%7B%22id%22%3A%22%2Fsubscriptions%2F12345678-aaaa-bbbb-cccc-123456789abc%2FresourceGroups%2Fgrafanastaging%2Fproviders%2FMicrosoft.Compute%2FvirtualMachines%2Fgrafana%22%7D%2C%22name%22%3A%22%22%2C%22aggregationType%22%3A1%2C%22namespace%22%3A%22%22%2C` +
`%22metricVisualization%22%3A%7B%22displayName%22%3A%22%22%2C%22resourceDisplayName%22%3A%22grafana%22%7D%7D%5D%7D%5D%7D`)
maxLink := makeTestDataLink(`http://ds/#blade/Microsoft_Azure_MonitoringMetrics/Metrics.ReactView/Referer/MetricsExplorer/TimeContext/%7B%22absolute%22%3A%7B%22startTime%22%3A%220001-01-01T00%3A00%3A00Z%22%2C%22endTime%22%3A%220001-01-01T00%3A00%3A00Z%22%7D%7D/` +
`ChartDefinition/%7B%22v2charts%22%3A%5B%7B%22metrics%22%3A%5B%7B%22resourceMetadata%22%3A%7B%22id%22%3A%22%2Fsubscriptions%2F%2FresourceGroups%2F%2Fproviders%2F%2Fgrafana%22%7D%2C%22name%22%3A%22%22%2C%22aggregationType%22%3A3%2C%22namespace%22%3A%22%22%2C` +
`ChartDefinition/%7B%22v2charts%22%3A%5B%7B%22metrics%22%3A%5B%7B%22resourceMetadata%22%3A%7B%22id%22%3A%22%2Fsubscriptions%2F12345678-aaaa-bbbb-cccc-123456789abc%2FresourceGroups%2Fgrafanastaging%2Fproviders%2FMicrosoft.Compute%2FvirtualMachines%2Fgrafana%22%7D%2C%22name%22%3A%22%22%2C%22aggregationType%22%3A3%2C%22namespace%22%3A%22%22%2C` +
`%22metricVisualization%22%3A%7B%22displayName%22%3A%22%22%2C%22resourceDisplayName%22%3A%22grafana%22%7D%7D%5D%7D%5D%7D`)
minLink := makeTestDataLink(`http://ds/#blade/Microsoft_Azure_MonitoringMetrics/Metrics.ReactView/Referer/MetricsExplorer/TimeContext/%7B%22absolute%22%3A%7B%22startTime%22%3A%220001-01-01T00%3A00%3A00Z%22%2C%22endTime%22%3A%220001-01-01T00%3A00%3A00Z%22%7D%7D/` +
`ChartDefinition/%7B%22v2charts%22%3A%5B%7B%22metrics%22%3A%5B%7B%22resourceMetadata%22%3A%7B%22id%22%3A%22%2Fsubscriptions%2F%2FresourceGroups%2F%2Fproviders%2F%2Fgrafana%22%7D%2C%22name%22%3A%22%22%2C%22aggregationType%22%3A2%2C%22namespace%22%3A%22%22%2C` +
`ChartDefinition/%7B%22v2charts%22%3A%5B%7B%22metrics%22%3A%5B%7B%22resourceMetadata%22%3A%7B%22id%22%3A%22%2Fsubscriptions%2F12345678-aaaa-bbbb-cccc-123456789abc%2FresourceGroups%2Fgrafanastaging%2Fproviders%2FMicrosoft.Compute%2FvirtualMachines%2Fgrafana%22%7D%2C%22name%22%3A%22%22%2C%22aggregationType%22%3A2%2C%22namespace%22%3A%22%22%2C` +
`%22metricVisualization%22%3A%7B%22displayName%22%3A%22%22%2C%22resourceDisplayName%22%3A%22grafana%22%7D%7D%5D%7D%5D%7D`)
countLink := makeTestDataLink(`http://ds/#blade/Microsoft_Azure_MonitoringMetrics/Metrics.ReactView/Referer/MetricsExplorer/TimeContext/%7B%22absolute%22%3A%7B%22startTime%22%3A%220001-01-01T00%3A00%3A00Z%22%2C%22endTime%22%3A%220001-01-01T00%3A00%3A00Z%22%7D%7D/` +
`ChartDefinition/%7B%22v2charts%22%3A%5B%7B%22metrics%22%3A%5B%7B%22resourceMetadata%22%3A%7B%22id%22%3A%22%2Fsubscriptions%2F%2FresourceGroups%2F%2Fproviders%2F%2Fgrafana%22%7D%2C%22name%22%3A%22%22%2C%22aggregationType%22%3A7%2C%22namespace%22%3A%22%22%2C` +
`ChartDefinition/%7B%22v2charts%22%3A%5B%7B%22metrics%22%3A%5B%7B%22resourceMetadata%22%3A%7B%22id%22%3A%22%2Fsubscriptions%2F12345678-aaaa-bbbb-cccc-123456789abc%2FresourceGroups%2Fgrafanastaging%2Fproviders%2FMicrosoft.Compute%2FvirtualMachines%2Fgrafana%22%7D%2C%22name%22%3A%22%22%2C%22aggregationType%22%3A7%2C%22namespace%22%3A%22%22%2C` +
`%22metricVisualization%22%3A%7B%22displayName%22%3A%22%22%2C%22resourceDisplayName%22%3A%22grafana%22%7D%7D%5D%7D%5D%7D`)

tests := []struct {
@ -242,6 +246,7 @@ func TestAzureMonitorParseResponse(t *testing.T) {
mockQuery: &types.AzureMonitorQuery{
UrlComponents: map[string]string{
"resourceName": "grafana",
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
},
Params: url.Values{
"aggregation": {"Average"},
@ -263,6 +268,7 @@ func TestAzureMonitorParseResponse(t *testing.T) {
mockQuery: &types.AzureMonitorQuery{
UrlComponents: map[string]string{
"resourceName": "grafana",
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
},
Params: url.Values{
"aggregation": {"Total"},
@ -284,6 +290,7 @@ func TestAzureMonitorParseResponse(t *testing.T) {
mockQuery: &types.AzureMonitorQuery{
UrlComponents: map[string]string{
"resourceName": "grafana",
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
},
Params: url.Values{
"aggregation": {"Maximum"},
@ -305,6 +312,7 @@ func TestAzureMonitorParseResponse(t *testing.T) {
mockQuery: &types.AzureMonitorQuery{
UrlComponents: map[string]string{
"resourceName": "grafana",
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
},
Params: url.Values{
"aggregation": {"Minimum"},
@ -326,6 +334,7 @@ func TestAzureMonitorParseResponse(t *testing.T) {
mockQuery: &types.AzureMonitorQuery{
UrlComponents: map[string]string{
"resourceName": "grafana",
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
},
Params: url.Values{
"aggregation": {"Count"},
@ -347,6 +356,7 @@ func TestAzureMonitorParseResponse(t *testing.T) {
mockQuery: &types.AzureMonitorQuery{
UrlComponents: map[string]string{
"resourceName": "grafana",
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
},
Params: url.Values{
"aggregation": {"Average"},
@ -382,6 +392,7 @@ func TestAzureMonitorParseResponse(t *testing.T) {
Alias: "custom {{resourcegroup}} {{namespace}} {{resourceName}} {{metric}}",
UrlComponents: map[string]string{
"resourceName": "grafana",
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
},
Params: url.Values{
"aggregation": {"Total"},
@ -404,6 +415,7 @@ func TestAzureMonitorParseResponse(t *testing.T) {
Alias: "{{dimensionname}}={{DimensionValue}}",
UrlComponents: map[string]string{
"resourceName": "grafana",
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
},
Params: url.Values{
"aggregation": {"Average"},
@ -441,6 +453,7 @@ func TestAzureMonitorParseResponse(t *testing.T) {
Alias: "{{resourcegroup}} {Blob Type={{blobtype}}, Tier={{Tier}}}",
UrlComponents: map[string]string{
"resourceName": "grafana",
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
},
Params: url.Values{
"aggregation": {"Average"},
@ -479,6 +492,7 @@ func TestAzureMonitorParseResponse(t *testing.T) {
Alias: "custom",
UrlComponents: map[string]string{
"resourceName": "grafana",
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
},
Params: url.Values{
"aggregation": {"Average"},
@ -494,6 +508,57 @@ func TestAzureMonitorParseResponse(t *testing.T) {
}).SetConfig(&data.FieldConfig{DisplayName: "custom", Links: []data.DataLink{averageLink}})),
},
},
{
name: "with legacy azure monitor query properties and without a resource uri",
responseFile: "2-azure-monitor-response-total.json",
mockQuery: &types.AzureMonitorQuery{
Alias: "custom {{resourcegroup}} {{namespace}} {{resourceName}} {{metric}}",
UrlComponents: map[string]string{
"subscription": "12345678-aaaa-bbbb-cccc-123456789abc",
"resourceGroup": "grafanastaging",
"metricDefinition": "Microsoft.Compute/virtualMachines",
"resourceName": "grafana",
},
Params: url.Values{
"aggregation": {"Total"},
},
},
expectedFrames: data.Frames{
data.NewFrame("",
data.NewField("Time", nil,
makeDates(time.Date(2019, 2, 9, 13, 29, 0, 0, time.UTC), 5, time.Minute),
).SetConfig(&data.FieldConfig{Links: []data.DataLink{totalLink}}),
data.NewField("Percentage CPU", nil, []*float64{
ptr.Float64(8.26), ptr.Float64(8.7), ptr.Float64(14.82), ptr.Float64(10.07), ptr.Float64(8.52),
}).SetConfig(&data.FieldConfig{Unit: "percent", DisplayName: "custom grafanastaging Microsoft.Compute/virtualMachines grafana Percentage CPU", Links: []data.DataLink{totalLink}})),
},
},
{
name: "with legacy azure monitor query properties and with a resource uri it should use the resource uri",
responseFile: "2-azure-monitor-response-total.json",
mockQuery: &types.AzureMonitorQuery{
Alias: "custom {{resourcegroup}} {{namespace}} {{resourceName}} {{metric}}",
UrlComponents: map[string]string{
"resourceURI": "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/grafana",
"subscription": "12345678-aaaa-bbbb-cccc-123456789abc-nope",
"resourceGroup": "grafanastaging-nope",
"metricDefinition": "Microsoft.Compute/virtualMachines-nope",
"resourceName": "grafana",
},
Params: url.Values{
"aggregation": {"Total"},
},
},
expectedFrames: data.Frames{
data.NewFrame("",
data.NewField("Time", nil,
makeDates(time.Date(2019, 2, 9, 13, 29, 0, 0, time.UTC), 5, time.Minute),
).SetConfig(&data.FieldConfig{Links: []data.DataLink{totalLink}}),
data.NewField("Percentage CPU", nil, []*float64{
ptr.Float64(8.26), ptr.Float64(8.7), ptr.Float64(14.82), ptr.Float64(10.07), ptr.Float64(8.52),
}).SetConfig(&data.FieldConfig{Unit: "percent", DisplayName: "custom grafanastaging Microsoft.Compute/virtualMachines grafana Percentage CPU", Links: []data.DataLink{totalLink}})),
},
},
}

datasource := &AzureMonitorDatasource{}
@ -609,3 +674,21 @@ func TestAzureMonitorCreateRequest(t *testing.T) {
})
}
}

func TestExtractResourceNameFromMetricsURL(t *testing.T) {
t.Run("it should extract the resourceName from a well-formed Metrics URL", func(t *testing.T) {
url := "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/Grafana-Test.VM/providers/microsoft.insights/metrics"
expected := "Grafana-Test.VM"
require.Equal(t, expected, extractResourceNameFromMetricsURL((url)))
})
t.Run("it should extract the resourceName from a well-formed Metrics URL in a case insensitive manner", func(t *testing.T) {
url := "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/Grafana-Test.VM/pRoViDeRs/MiCrOsOfT.iNsIgHtS/mEtRiCs"
expected := "Grafana-Test.VM"
require.Equal(t, expected, extractResourceNameFromMetricsURL((url)))
})
t.Run("it should return an empty string if no match is found", func(t *testing.T) {
url := "/subscriptions/12345678-aaaa-bbbb-cccc-123456789abc/resourceGroups/grafanastaging/providers/Microsoft.Compute/virtualMachines/Grafana-Test.VM/providers/microsoft.insights/nope-this-part-does-not-match"
expected := ""
require.Equal(t, expected, extractResourceNameFromMetricsURL((url)))
})
}
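The three test cases above pin down the helper's contract: return the path segment immediately before the `/providers/microsoft.insights/metrics` suffix, match case-insensitively, and return an empty string when the suffix is absent. A minimal sketch that satisfies that contract (a hypothetical regex-based version, not necessarily the commit's actual implementation) could look like:

```go
package main

import (
	"fmt"
	"regexp"
)

// Case-insensitively capture the path segment that immediately precedes the
// "/providers/microsoft.insights/metrics" suffix of a Metrics URL.
var resourceNameRe = regexp.MustCompile(`(?i)/([^/]+)/providers/microsoft\.insights/metrics`)

// extractResourceNameFromMetricsURL returns the resource name embedded in a
// Metrics URL, or "" when the URL does not contain the expected suffix.
func extractResourceNameFromMetricsURL(url string) string {
	m := resourceNameRe.FindStringSubmatch(url)
	if m == nil {
		return ""
	}
	return m[1]
}

func main() {
	fmt.Println(extractResourceNameFromMetricsURL(
		"/subscriptions/sub/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/Grafana-Test.VM/providers/microsoft.insights/metrics"))
	// prints "Grafana-Test.VM"
}
```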
@ -18,7 +18,7 @@ type urlBuilder struct {
ResourceName string
}

func (params *urlBuilder) buildMetricsURLFromLegacyQuery() string {
func (params *urlBuilder) buildResourceURIFromLegacyQuery() string {
subscription := params.Subscription

if params.Subscription == "" {
@ -54,7 +54,7 @@ func (params *urlBuilder) BuildMetricsURL() string {

// Prior to Grafana 9, we had a legacy query object rather than a resourceURI, so we manually create the resource URI
if resourceURI == "" {
resourceURI = params.buildMetricsURLFromLegacyQuery()
resourceURI = params.buildResourceURIFromLegacyQuery()
}

return fmt.Sprintf("%s/providers/microsoft.insights/metrics", resourceURI)
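The legacy-to-URI mapping the renamed method performs can be sketched as follows. This is a simplified standalone illustration built from the component values visible in the tests above, not the commit's exact `urlBuilder` code, which handles empty fields and lives as a method on the struct:

```go
package main

import "fmt"

// buildResourceURIFromLegacyQuery assembles an Azure resource URI from the
// legacy query components, matching the shape used in the test fixtures, e.g.
// /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<name>.
func buildResourceURIFromLegacyQuery(subscription, resourceGroup, metricDefinition, resourceName string) string {
	return fmt.Sprintf("/subscriptions/%s/resourceGroups/%s/providers/%s/%s",
		subscription, resourceGroup, metricDefinition, resourceName)
}

func main() {
	uri := buildResourceURIFromLegacyQuery(
		"12345678-aaaa-bbbb-cccc-123456789abc", "grafanastaging",
		"Microsoft.Compute/virtualMachines", "grafana")
	// BuildMetricsURL then appends the metrics provider suffix to this URI.
	fmt.Println(uri + "/providers/microsoft.insights/metrics")
}
```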
@ -40,6 +40,11 @@ func (h *Service) HandleRequest(ctx context.Context, ds *models.DataSource, quer
return legacydata.DataResponse{}, err
}

decryptedValues, err := h.dataSourcesService.DecryptedValues(ctx, ds)
if err != nil {
return legacydata.DataResponse{}, err
}

instanceSettings := &backend.DataSourceInstanceSettings{
ID: ds.Id,
Name: ds.Name,
@ -49,7 +54,7 @@ func (h *Service) HandleRequest(ctx context.Context, ds *models.DataSource, quer
BasicAuthEnabled: ds.BasicAuth,
BasicAuthUser: ds.BasicAuthUser,
JSONData: jsonDataBytes,
DecryptedSecureJSONData: h.dataSourcesService.DecryptedValues(ds),
DecryptedSecureJSONData: decryptedValues,
Updated: ds.Updated,
UID: ds.Uid,
}

@ -13,6 +13,7 @@ import (
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/services/oauthtoken"
"github.com/grafana/grafana/pkg/services/secrets/fakes"
"github.com/grafana/grafana/pkg/services/secrets/kvstore"
secretsManager "github.com/grafana/grafana/pkg/services/secrets/manager"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/tsdb/legacydata"
@ -38,8 +39,9 @@ func TestHandleRequest(t *testing.T) {
actualReq = req
return backend.NewQueryDataResponse(), nil
}
secretsStore := kvstore.SetupTestService(t)
secretsService := secretsManager.SetupTestService(t, fakes.NewFakeSecretsStore())
dsService := datasourceservice.ProvideService(nil, secretsService, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
dsService := datasourceservice.ProvideService(nil, secretsService, secretsStore, cfg, featuremgmt.WithFeatures(), acmock.New(), acmock.NewPermissionsServicesMock())
s := ProvideService(client, nil, dsService)

ds := &models.DataSource{Id: 12, Type: "unregisteredType", JsonData: simplejson.New()}

@ -96,7 +96,6 @@ export const NavBarNext = React.memo(() => {
<ul className={styles.itemList}>
<NavBarItemWithoutMenu
elClassName={styles.grafanaLogoInner}
isActive={isMatchOrChildMatch(homeItem, activeItem)}
label="Home"
className={styles.grafanaLogo}
url={homeItem.url}

62
public/app/features/admin/ExportStartButton.tsx
Normal file
@ -0,0 +1,62 @@
import { css } from '@emotion/css';
import React, { useState } from 'react';

import { GrafanaTheme2 } from '@grafana/data';
import { getBackendSrv } from '@grafana/runtime';
import { Button, CodeEditor, Modal, useTheme2 } from '@grafana/ui';

export const ExportStartButton = () => {
const styles = getStyles(useTheme2());
const [open, setOpen] = useState(false);
const [body, setBody] = useState({
format: 'git',
git: {},
});
const onDismiss = () => setOpen(false);
const doStart = () => {
getBackendSrv()
.post('/api/admin/export', body)
.then((v) => {
console.log('GOT', v);
onDismiss();
});
};

return (
<>
<Modal title={'Export grafana instance'} isOpen={open} onDismiss={onDismiss}>
<div className={styles.wrap}>
<CodeEditor
height={200}
value={JSON.stringify(body, null, 2) ?? ''}
showLineNumbers={false}
readOnly={false}
language="json"
showMiniMap={false}
onBlur={(text: string) => {
setBody(JSON.parse(text)); // force JSON?
}}
/>
</div>
<Modal.ButtonRow>
<Button onClick={doStart}>Start</Button>
<Button variant="secondary" onClick={onDismiss}>
Cancel
</Button>
</Modal.ButtonRow>
</Modal>

<Button onClick={() => setOpen(true)} variant="primary">
Export
</Button>
</>
);
};

const getStyles = (theme: GrafanaTheme2) => {
return {
wrap: css`
border: 2px solid #111;
`,
};
};

82
public/app/features/admin/ExportStatus.tsx
Normal file
@ -0,0 +1,82 @@
import { css } from '@emotion/css';
import React, { useEffect, useState } from 'react';

import { GrafanaTheme2, isLiveChannelMessageEvent, isLiveChannelStatusEvent, LiveChannelScope } from '@grafana/data';
import { getBackendSrv, getGrafanaLiveSrv } from '@grafana/runtime';
import { Button, useTheme2 } from '@grafana/ui';

import { ExportStartButton } from './ExportStartButton';

interface ExportStatusMessage {
running: boolean;
target: string;
started: number;
finished: number;
update: number;
count: number;
current: number;
last: string;
status: string;
}

export const ExportStatus = () => {
const styles = getStyles(useTheme2());
const [status, setStatus] = useState<ExportStatusMessage>();

useEffect(() => {
const subscription = getGrafanaLiveSrv()
.getStream<ExportStatusMessage>({
scope: LiveChannelScope.Grafana,
namespace: 'broadcast',
path: 'export',
})
.subscribe({
next: (evt) => {
if (isLiveChannelMessageEvent(evt)) {
setStatus(evt.message);
} else if (isLiveChannelStatusEvent(evt)) {
setStatus(evt.message);
}
},
});
return () => {
subscription.unsubscribe();
};
}, []);

if (!status) {
return (
<div className={styles.wrap}>
<ExportStartButton />
</div>
);
}

return (
<div className={styles.wrap}>
<pre>{JSON.stringify(status, null, 2)}</pre>
{Boolean(!status.running) && <ExportStartButton />}
{Boolean(status.running) && (
<Button
variant="secondary"
onClick={() => {
getBackendSrv().post('/api/admin/export/stop');
}}
>
Stop
</Button>
)}
</div>
);
};

const getStyles = (theme: GrafanaTheme2) => {
return {
wrap: css`
border: 4px solid red;
`,
running: css`
border: 4px solid green;
`,
};
};

@ -10,6 +10,7 @@ import { contextSrv } from '../../core/services/context_srv';
import { Loader } from '../plugins/admin/components/Loader';

import { CrawlerStatus } from './CrawlerStatus';
import { ExportStatus } from './ExportStatus';
import { getServerStats, ServerStat } from './state/apis';

export const ServerStats = () => {
@ -98,6 +99,7 @@ export const ServerStats = () => {
)}

{config.featureToggles.dashboardPreviews && config.featureToggles.dashboardPreviewsAdmin && <CrawlerStatus />}
{config.featureToggles.export && <ExportStatus />}
</>
);
};

@ -416,6 +416,7 @@ export class QueryEditorRow<TQuery extends DataQuery> extends PureComponent<Prop
<OperationRowHelp>
<DatasourceCheatsheet
onClickExample={(query) => this.onClickExample(query)}
query={this.props.query}
datasource={datasource}
/>
</OperationRowHelp>

@ -229,8 +229,15 @@ export default class LogsCheatSheet extends PureComponent<
<div
className="cheat-sheet-item__example"
key={expr}
onClick={(e) =>
this.onClickExample({ refId: 'A', expression: expr, queryMode: 'Logs', region: 'default', id: 'A' })
onClick={() =>
this.onClickExample({
refId: this.props.query.refId ?? 'A',
expression: expr,
queryMode: 'Logs',
region: this.props.query.region,
id: this.props.query.refId ?? 'A',
logGroupNames: 'logGroupNames' in this.props.query ? this.props.query.logGroupNames : [],
})
}
>
<pre>{renderHighlightedMarkup(expr, keyPrefix)}</pre>

@ -10,9 +10,9 @@ type Result = { frames: DataFrameJSON[]; error?: string };
/**
* A retry strategy specifically for cloud watch logs query. Cloud watch logs queries need first starting the query
* and the polling for the results. The start query can fail because of the concurrent queries rate limit,
* and so we hove to retry the start query call if there is already lot of queries running.
* and so we have to retry the start query call if there is already lot of queries running.
*
* As we send multiple queries in single request some can fail and some can succeed and we have to also handle those
* As we send multiple queries in a single request some can fail and some can succeed and we have to also handle those
* cases by only retrying the failed queries. We retry the failed queries until we hit the time limit or all queries
* succeed and only then we pass the data forward. This means we wait longer but makes the code a bit simpler as we
* can treat starting the query and polling as steps in a pipeline.