Merge branch 'master' of github.com:grafana/grafana into develop
commit ad1d69861e
@@ -22,6 +22,7 @@
* **Timezone**: Time ranges like Today & Yesterday now work correctly when timezone setting is set to UTC [#8916](https://github.com/grafana/grafana/issues/8916), thx [@ctide](https://github.com/ctide)
* **Prometheus**: Align $__interval with the step parameters. [#9226](https://github.com/grafana/grafana/pull/9226), thx [@alin-amana](https://github.com/alin-amana)
* **Prometheus**: Autocomplete for label name and label value [#9208](https://github.com/grafana/grafana/pull/9208), thx [@mtanda](https://github.com/mtanda)
* **Postgres**: New Postgres data source [#9209](https://github.com/grafana/grafana/pull/9209), thx [@svenklemm](https://github.com/svenklemm)

## Minor

* **SMTP**: Make it possible to set specific EHLO for smtp client. [#9319](https://github.com/grafana/grafana/issues/9319)

docs/sources/features/datasources/postgres.md (new file, 186 lines)
@@ -0,0 +1,186 @@
+++
title = "Using PostgreSQL in Grafana"
description = "Guide for using PostgreSQL in Grafana"
keywords = ["grafana", "postgresql", "guide"]
type = "docs"
[menu.docs]
name = "PostgreSQL"
parent = "datasources"
weight = 7
+++

# Using PostgreSQL in Grafana

Grafana ships with a built-in PostgreSQL data source plugin that allows you to query and visualize data from a PostgreSQL compatible database.

## Adding the data source

1. Open the side menu by clicking the Grafana icon in the top header.
2. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
3. Click the `+ Add data source` button in the top header.
4. Select *PostgreSQL* from the *Type* dropdown.

### Database User Permissions (Important!)

The database user you specify when you add the data source should only be granted SELECT permissions on
the specified database & tables you want to query. Grafana does not validate that a query is safe; it
could include any SQL statement. For example, statements like `DELETE FROM user;` and `DROP TABLE user;` would be
executed. To protect against this we **highly** recommend you create a specific PostgreSQL user with restricted permissions.

Example:

```sql
CREATE USER grafanareader WITH PASSWORD 'password';
GRANT USAGE ON SCHEMA schema TO grafanareader;
GRANT SELECT ON schema.table TO grafanareader;
```

Make sure the user does not get any unwanted privileges from the public role.
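
For example, a minimal sketch of locking that down (the database name `grafanadb` is a hypothetical placeholder; adjust to your setup):

```sql
-- Revoke the default privileges the PUBLIC role receives on the database and the public schema
REVOKE ALL ON DATABASE grafanadb FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM PUBLIC;
```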

## Macros

To simplify syntax and to allow for dynamic parts, like date range filters, the query can contain macros.

Macro example | Description
------------ | -------------
*$__time(dateColumn)* | Will be replaced by an expression to rename the column to `time`. For example, *dateColumn as time*
*$__timeSec(dateColumn)* | Will be replaced by an expression to rename the column to `time` and convert the value to a unix timestamp. For example, *extract(epoch from dateColumn) as time*
*$__timeFilter(dateColumn)* | Will be replaced by a time range filter using the specified column name. For example, *dateColumn > to_timestamp(1494410783) AND dateColumn < to_timestamp(1494497183)*
*$__timeFrom()* | Will be replaced by the start of the currently active time selection. For example, *to_timestamp(1494410783)*
*$__timeTo()* | Will be replaced by the end of the currently active time selection. For example, *to_timestamp(1494497183)*
*$__timeGroup(dateColumn,'5m')* | Will be replaced by an expression usable in a GROUP BY clause. For example, *(extract(epoch from "dateColumn")/extract(epoch from '5m'::interval))::int*
*$__unixEpochFilter(dateColumn)* | Will be replaced by a time range filter using the specified column name with times represented as a unix timestamp. For example, *dateColumn > 1494410783 AND dateColumn < 1494497183*
*$__unixEpochFrom()* | Will be replaced by the start of the currently active time selection as a unix timestamp. For example, *1494410783*
*$__unixEpochTo()* | Will be replaced by the end of the currently active time selection as a unix timestamp. For example, *1494497183*
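
For example, a query combining the *$__time* and *$__timeFilter* macros could look like the sketch below (the `metrics` table and its `created_at` and `value` columns are hypothetical):

```sql
SELECT
  $__time(created_at),
  value
FROM metrics
WHERE $__timeFilter(created_at)
ORDER BY created_at ASC
```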

We plan to add many more macros. If you have suggestions for what macros you would like to see, please [open an issue](https://github.com/grafana/grafana) in our GitHub repo.

The query editor has a link named `Generated SQL` that shows up after a query has been executed, while in panel edit mode. Click on it and it will expand and show the raw interpolated SQL string that was executed.

## Table queries

If the `Format as` query option is set to `Table` then you can write basically any type of SQL query. The table panel will automatically show the results of whatever columns & rows your query returns.

Query editor with example query:



The query:

```sql
SELECT
  title as "Title",
  "user".login as "Created By",
  dashboard.created as "Created On"
FROM dashboard
INNER JOIN "user" on "user".id = dashboard.created_by
WHERE $__timeFilter(dashboard.created)
```

You can control the name of the Table panel columns by using regular `as` SQL column selection syntax.

The resulting table panel:



### Time series queries

If you set `Format as` to `Time series`, for use in a Graph panel for example, then the query must return a column named `time` that contains either a SQL datetime or any numeric datatype representing a unix epoch in seconds.
Any column except `time` and `metric` is treated as a value column.
You may return a column named `metric` that is used as the metric name for the value column.

Example with a `metric` column:

```sql
SELECT
  min(time_date_time) as time,
  min(value_double),
  'min' as metric
FROM test_data
WHERE $__timeFilter(time_date_time)
GROUP BY metric1, (extract(epoch from time_date_time)/extract(epoch from $__interval::interval))::int
ORDER BY time asc
```

Example with multiple columns:

```sql
SELECT
  min(time_date_time) as time,
  min(value_double) as min_value,
  max(value_double) as max_value
FROM test_data
WHERE $__timeFilter(time_date_time)
GROUP BY metric1, (extract(epoch from time_date_time)/extract(epoch from $__interval::interval))::int
ORDER BY time asc
```

## Templating

Instead of hard-coding things like server, application and sensor names in your metric queries you can use variables in their place. Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data being displayed in your dashboard.

Check out the [Templating]({{< relref "reference/templating.md" >}}) documentation for an introduction to the templating feature and the different types of template variables.

### Query Variable

If you add a template variable of the type `Query`, you can write a PostgreSQL query that
returns things like measurement names, key names or key values that are shown as a dropdown select box.

For example, you can have a variable that contains all values for the `hostname` column in a table if you specify a query like this in the templating variable *Query* setting.

```sql
SELECT hostname FROM host
```

A query can return multiple columns and Grafana will automatically create a list from them. For example, the query below will return a list with values from `hostname` and `hostname2`.

```sql
SELECT host.hostname, other_host.hostname2 FROM host JOIN other_host ON host.city = other_host.city
```

Another option is a query that can create a key/value variable. The query should return two columns that are named `__text` and `__value`. The `__text` column value should be unique (if it is not unique then the first value is used). The options in the dropdown will have a text and value that allow you to have a friendly name as text and an id as the value. An example query with `hostname` as the text and `id` as the value:

```sql
SELECT hostname AS __text, id AS __value FROM host
```

You can also create nested variables. For example, if you had another variable named `region`, you could have
the hosts variable show only hosts from the currently selected region with a query like this (if `region` is a multi-value variable then use the `IN` comparison operator rather than `=` to match against multiple values):

```sql
SELECT hostname FROM host WHERE region IN($region)
```

### Using Variables in Queries

Template variables are quoted automatically, so do not wrap string values in quotes in WHERE clauses. If the variable is a multi-value variable then use the `IN` comparison operator rather than `=` to match against multiple values.

There are two syntaxes:

`$<varname>` Example with a template variable named `hostname`:

```sql
SELECT
  atimestamp as time,
  aint as value
FROM table
WHERE $__timeFilter(atimestamp) and hostname in($hostname)
ORDER BY atimestamp ASC
```

`[[varname]]` Example with a template variable named `hostname`:

```sql
SELECT
  atimestamp as time,
  aint as value
FROM table
WHERE $__timeFilter(atimestamp) and hostname in([[hostname]])
ORDER BY atimestamp ASC
```

## Alerting

Time series queries should work in alerting conditions. Table formatted queries are not yet supported in alert rule
conditions.

@@ -11,11 +11,41 @@ weight = 2
# Annotations

Annotations provide a way to mark points on the graph with rich events. When you hover over an annotation
you can get title, tags, and text information for the event.
you can get event description and event tags. The text field can include links to other systems with more detail.



## Queries
## Native annotations

Grafana v4.6+ comes with a native annotation store and the ability to add annotation events directly from the graph panel or via the [HTTP API]({{< relref "http_api/annotations.md" >}})

## Adding annotations

Add an annotation by holding down CTRL/CMD + mouse click. Adding tags to the annotation will make it searchable from other dashboards.

<!-- adding annotation gif animation -->

### Adding region events

You can also hold down CTRL/CMD and select a region to create a region annotation.

<!-- region image/gif animation -->

### Built-in query

After you add an annotation it will still be visible. This is due to the built-in annotation query that exists on all dashboards. This annotation query will
fetch all annotation events that originate from the current dashboard and show them on the panel where they were created. This includes alert state history annotations. You can
stop annotations from being fetched & drawn by opening the **Annotations** settings (via the Dashboard cogs menu) and modifying the query named `Annotations & Alerts (Built-in)`.

<!-- image of built in query -->

### Query by tag

You can create new annotation queries that fetch annotations from the native annotation store via the `-- Grafana --` data source by setting *Filter by* to `Tags`. Specify at least
one tag. For example, create an annotation query named `outages` and specify a tag named `outage`. This query will show all annotations you create (from any dashboard or via API) that
have the `outage` tag.

## Querying other data sources

Annotation events are fetched via annotation queries. To add a new annotation query to a dashboard
open the dashboard settings menu, then select `Annotations`. This will open the dashboard annotations

@@ -26,7 +26,7 @@ import (
    _ "github.com/grafana/grafana/pkg/tsdb/influxdb"
    _ "github.com/grafana/grafana/pkg/tsdb/mysql"
    _ "github.com/grafana/grafana/pkg/tsdb/opentsdb"

    _ "github.com/grafana/grafana/pkg/tsdb/postgres"
    _ "github.com/grafana/grafana/pkg/tsdb/prometheus"
    _ "github.com/grafana/grafana/pkg/tsdb/testdata"
)

@@ -73,11 +73,12 @@ type GetDashboardSnapshotQuery struct {
}

type DashboardSnapshots []*DashboardSnapshot
type DashboardSnapshotsList []*DashboardSnapshotDTO

type GetDashboardSnapshotsQuery struct {
    Name  string
    Limit int
    OrgId int64

    Result DashboardSnapshots
    Result DashboardSnapshotsList
}

@@ -17,6 +17,7 @@ const (
    DS_CLOUDWATCH    = "cloudwatch"
    DS_KAIROSDB      = "kairosdb"
    DS_PROMETHEUS    = "prometheus"
    DS_POSTGRES      = "postgres"
    DS_ACCESS_DIRECT = "direct"
    DS_ACCESS_PROXY  = "proxy"
)

@@ -62,6 +63,7 @@ var knownDatasourcePlugins map[string]bool = map[string]bool{
    DS_CLOUDWATCH: true,
    DS_PROMETHEUS: true,
    DS_OPENTSDB:   true,
    DS_POSTGRES:   true,
    "opennms":      true,
    "druid":        true,
    "dalmatinerdb": true,

@@ -86,9 +86,10 @@ func GetDashboardSnapshot(query *m.GetDashboardSnapshotQuery) error {
}

func SearchDashboardSnapshots(query *m.GetDashboardSnapshotsQuery) error {
    var snapshots = make(m.DashboardSnapshots, 0)
    var snapshots = make(m.DashboardSnapshotsList, 0)

    sess := x.Limit(query.Limit)
    sess.Table("dashboard_snapshot")

    if query.Name != "" {
        sess.Where("name LIKE ?", query.Name)

@@ -96,6 +96,7 @@ func CreateUser(cmd *m.CreateUserCommand) error {
        EmailVerified: cmd.EmailVerified,
        Created:       time.Now(),
        Updated:       time.Now(),
        LastSeenAt:    time.Now().AddDate(-10, 0, 0),
    }

    if len(cmd.Password) > 0 {

@@ -11,26 +11,21 @@ import (
const rsIdentifier = `([_a-zA-Z0-9]+)`
const sExpr = `\$` + rsIdentifier + `\(([^\)]*)\)`

type SqlMacroEngine interface {
    Interpolate(sql string) (string, error)
}

type MySqlMacroEngine struct {
    TimeRange *tsdb.TimeRange
}

func NewMysqlMacroEngine(timeRange *tsdb.TimeRange) SqlMacroEngine {
    return &MySqlMacroEngine{
        TimeRange: timeRange,
    }
func NewMysqlMacroEngine() tsdb.SqlMacroEngine {
    return &MySqlMacroEngine{}
}

func (m *MySqlMacroEngine) Interpolate(sql string) (string, error) {
func (m *MySqlMacroEngine) Interpolate(timeRange *tsdb.TimeRange, sql string) (string, error) {
    m.TimeRange = timeRange
    rExp, _ := regexp.Compile(sExpr)
    var macroError error

    sql = ReplaceAllStringSubmatchFunc(rExp, sql, func(groups []string) string {
        res, err := m.EvaluateMacro(groups[1], groups[2:])
    sql = replaceAllStringSubmatchFunc(rExp, sql, func(groups []string) string {
        res, err := m.evaluateMacro(groups[1], groups[2:])
        if err != nil && macroError == nil {
            macroError = err
            return "macro_error()"

@@ -45,7 +40,7 @@ func (m *MySqlMacroEngine) Interpolate(sql string) (string, error) {
    return sql, nil
}

func ReplaceAllStringSubmatchFunc(re *regexp.Regexp, str string, repl func([]string) string) string {
func replaceAllStringSubmatchFunc(re *regexp.Regexp, str string, repl func([]string) string) string {
    result := ""
    lastIndex := 0

@@ -62,7 +57,7 @@ func ReplaceAllStringSubmatchFunc(re *regexp.Regexp, str string, repl func([]str
    return result + str[lastIndex:]
}

func (m *MySqlMacroEngine) EvaluateMacro(name string, args []string) (string, error) {
func (m *MySqlMacroEngine) evaluateMacro(name string, args []string) (string, error) {
    switch name {
    case "__time":
        if len(args) == 0 {

@@ -9,86 +9,60 @@ import (

func TestMacroEngine(t *testing.T) {
    Convey("MacroEngine", t, func() {
        engine := &MySqlMacroEngine{}
        timeRange := &tsdb.TimeRange{From: "5m", To: "now"}

        Convey("interpolate __time function", func() {
            engine := &MySqlMacroEngine{}

            sql, err := engine.Interpolate("select $__time(time_column)")
            sql, err := engine.Interpolate(nil, "select $__time(time_column)")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select UNIX_TIMESTAMP(time_column) as time_sec")
        })

        Convey("interpolate __time function wrapped in aggregation", func() {
            engine := &MySqlMacroEngine{}

            sql, err := engine.Interpolate("select min($__time(time_column))")
            sql, err := engine.Interpolate(nil, "select min($__time(time_column))")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select min(UNIX_TIMESTAMP(time_column) as time_sec)")
        })

        Convey("interpolate __timeFilter function", func() {
            engine := &MySqlMacroEngine{
                TimeRange: &tsdb.TimeRange{From: "5m", To: "now"},
            }

            sql, err := engine.Interpolate("WHERE $__timeFilter(time_column)")
            sql, err := engine.Interpolate(timeRange, "WHERE $__timeFilter(time_column)")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "WHERE time_column >= FROM_UNIXTIME(18446744066914186738) AND time_column <= FROM_UNIXTIME(18446744066914187038)")
        })

        Convey("interpolate __timeFrom function", func() {
            engine := &MySqlMacroEngine{
                TimeRange: &tsdb.TimeRange{From: "5m", To: "now"},
            }

            sql, err := engine.Interpolate("select $__timeFrom(time_column)")
            sql, err := engine.Interpolate(timeRange, "select $__timeFrom(time_column)")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select FROM_UNIXTIME(18446744066914186738)")
        })

        Convey("interpolate __timeTo function", func() {
            engine := &MySqlMacroEngine{
                TimeRange: &tsdb.TimeRange{From: "5m", To: "now"},
            }

            sql, err := engine.Interpolate("select $__timeTo(time_column)")
            sql, err := engine.Interpolate(timeRange, "select $__timeTo(time_column)")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select FROM_UNIXTIME(18446744066914187038)")
        })

        Convey("interpolate __unixEpochFilter function", func() {
            engine := &MySqlMacroEngine{
                TimeRange: &tsdb.TimeRange{From: "5m", To: "now"},
            }

            sql, err := engine.Interpolate("select $__unixEpochFilter(18446744066914186738)")
            sql, err := engine.Interpolate(timeRange, "select $__unixEpochFilter(18446744066914186738)")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select 18446744066914186738 >= 18446744066914186738 AND 18446744066914186738 <= 18446744066914187038")
        })

        Convey("interpolate __unixEpochFrom function", func() {
            engine := &MySqlMacroEngine{
                TimeRange: &tsdb.TimeRange{From: "5m", To: "now"},
            }

            sql, err := engine.Interpolate("select $__unixEpochFrom()")
            sql, err := engine.Interpolate(timeRange, "select $__unixEpochFrom()")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select 18446744066914186738")
        })

        Convey("interpolate __unixEpochTo function", func() {
            engine := &MySqlMacroEngine{
                TimeRange: &tsdb.TimeRange{From: "5m", To: "now"},
            }

            sql, err := engine.Interpolate("select $__unixEpochTo()")
            sql, err := engine.Interpolate(timeRange, "select $__unixEpochTo()")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select 18446744066914187038")

@@ -6,142 +6,57 @@ import (
    "database/sql"
    "fmt"
    "strconv"
    "sync"

    "time"

    "github.com/go-sql-driver/mysql"
    "github.com/go-xorm/core"
    "github.com/go-xorm/xorm"
    "github.com/grafana/grafana/pkg/components/null"
    "github.com/grafana/grafana/pkg/components/simplejson"
    "github.com/grafana/grafana/pkg/log"
    "github.com/grafana/grafana/pkg/models"
    "github.com/grafana/grafana/pkg/tsdb"
)

type MysqlExecutor struct {
    engine *xorm.Engine
    log    log.Logger
}

type engineCacheType struct {
    cache    map[int64]*xorm.Engine
    versions map[int64]int
    sync.Mutex
}

var engineCache = engineCacheType{
    cache:    make(map[int64]*xorm.Engine),
    versions: make(map[int64]int),
type MysqlQueryEndpoint struct {
    sqlEngine tsdb.SqlEngine
    log       log.Logger
}

func init() {
    tsdb.RegisterTsdbQueryEndpoint("mysql", NewMysqlExecutor)
    tsdb.RegisterTsdbQueryEndpoint("mysql", NewMysqlQueryEndpoint)
}

func NewMysqlExecutor(datasource *models.DataSource) (tsdb.TsdbQueryEndpoint, error) {
    executor := &MysqlExecutor{
func NewMysqlQueryEndpoint(datasource *models.DataSource) (tsdb.TsdbQueryEndpoint, error) {
    endpoint := &MysqlQueryEndpoint{
        log: log.New("tsdb.mysql"),
    }

    err := executor.initEngine(datasource)
    if err != nil {
        return nil, err
    }

    return executor, nil
}

func (e *MysqlExecutor) initEngine(dsInfo *models.DataSource) error {
    engineCache.Lock()
    defer engineCache.Unlock()

    if engine, present := engineCache.cache[dsInfo.Id]; present {
        if version, _ := engineCache.versions[dsInfo.Id]; version == dsInfo.Version {
            e.engine = engine
            return nil
        }
    endpoint.sqlEngine = &tsdb.DefaultSqlEngine{
        MacroEngine: NewMysqlMacroEngine(),
    }

    cnnstr := fmt.Sprintf("%s:%s@%s(%s)/%s?collation=utf8mb4_unicode_ci&parseTime=true&loc=UTC",
        dsInfo.User,
        dsInfo.Password,
        datasource.User,
        datasource.Password,
        "tcp",
        dsInfo.Url,
        dsInfo.Database)
        datasource.Url,
        datasource.Database,
    )
    endpoint.log.Debug("getEngine", "connection", cnnstr)

    e.log.Debug("getEngine", "connection", cnnstr)

    engine, err := xorm.NewEngine("mysql", cnnstr)
    engine.SetMaxOpenConns(10)
    engine.SetMaxIdleConns(10)
    if err != nil {
        return err
    if err := endpoint.sqlEngine.InitEngine("mysql", datasource, cnnstr); err != nil {
        return nil, err
    }

    engineCache.cache[dsInfo.Id] = engine
    e.engine = engine
    return nil
    return endpoint, nil
}

func (e *MysqlExecutor) Query(ctx context.Context, dsInfo *models.DataSource, tsdbQuery *tsdb.TsdbQuery) (*tsdb.Response, error) {
    result := &tsdb.Response{
        Results: make(map[string]*tsdb.QueryResult),
    }

    macroEngine := NewMysqlMacroEngine(tsdbQuery.TimeRange)
    session := e.engine.NewSession()
    defer session.Close()
    db := session.DB()

    for _, query := range tsdbQuery.Queries {
        rawSql := query.Model.Get("rawSql").MustString()
        if rawSql == "" {
            continue
        }

        queryResult := &tsdb.QueryResult{Meta: simplejson.New(), RefId: query.RefId}
        result.Results[query.RefId] = queryResult

        rawSql, err := macroEngine.Interpolate(rawSql)
        if err != nil {
            queryResult.Error = err
            continue
        }

        queryResult.Meta.Set("sql", rawSql)

        rows, err := db.Query(rawSql)
        if err != nil {
            queryResult.Error = err
            continue
        }

        defer rows.Close()

        format := query.Model.Get("format").MustString("time_series")

        switch format {
        case "time_series":
            err := e.TransformToTimeSeries(query, rows, queryResult)
            if err != nil {
                queryResult.Error = err
                continue
            }
        case "table":
            err := e.TransformToTable(query, rows, queryResult)
            if err != nil {
                queryResult.Error = err
                continue
            }
        }
    }

    return result, nil
// Query is the main function for the MysqlExecutor
func (e *MysqlQueryEndpoint) Query(ctx context.Context, dsInfo *models.DataSource, tsdbQuery *tsdb.TsdbQuery) (*tsdb.Response, error) {
    return e.sqlEngine.Query(ctx, dsInfo, tsdbQuery, e.transformToTimeSeries, e.transformToTable)
}

func (e MysqlExecutor) TransformToTable(query *tsdb.Query, rows *core.Rows, result *tsdb.QueryResult) error {
func (e MysqlQueryEndpoint) transformToTable(query *tsdb.Query, rows *core.Rows, result *tsdb.QueryResult) error {
    columnNames, err := rows.Columns()
    columnCount := len(columnNames)

@@ -166,7 +81,7 @@ func (e MysqlExecutor) TransformToTable(query *tsdb.Query, rows *core.Rows, resu
    rowLimit := 1000000
    rowCount := 0

    for ; rows.Next(); rowCount += 1 {
    for ; rows.Next(); rowCount++ {
        if rowCount > rowLimit {
            return fmt.Errorf("MySQL query row limit exceeded, limit %d", rowLimit)
        }

@@ -184,7 +99,7 @@ func (e MysqlExecutor) TransformToTable(query *tsdb.Query, rows *core.Rows, resu
    return nil
}

func (e MysqlExecutor) getTypedRowData(types []*sql.ColumnType, rows *core.Rows) (tsdb.RowValues, error) {
func (e MysqlQueryEndpoint) getTypedRowData(types []*sql.ColumnType, rows *core.Rows) (tsdb.RowValues, error) {
    values := make([]interface{}, len(types))

    for i, stype := range types {

@@ -248,7 +163,7 @@ func (e MysqlExecutor) getTypedRowData(types []*sql.ColumnType, rows *core.Rows)
    return values, nil
}

func (e MysqlExecutor) TransformToTimeSeries(query *tsdb.Query, rows *core.Rows, result *tsdb.QueryResult) error {
func (e MysqlQueryEndpoint) transformToTimeSeries(query *tsdb.Query, rows *core.Rows, result *tsdb.QueryResult) error {
    pointsBySeries := make(map[string]*tsdb.TimeSeries)
    seriesByQueryOrder := list.New()
    columnNames, err := rows.Columns()

@@ -261,7 +176,7 @@ func (e MysqlExecutor) TransformToTimeSeries(query *tsdb.Query, rows *core.Rows,
    rowLimit := 1000000
    rowCount := 0

    for ; rows.Next(); rowCount += 1 {
    for ; rows.Next(); rowCount++ {
        if rowCount > rowLimit {
            return fmt.Errorf("MySQL query row limit exceeded, limit %d", rowLimit)
        }

@@ -18,14 +18,16 @@ func TestMySQL(t *testing.T) {
    SkipConvey("MySQL", t, func() {
        x := InitMySQLTestDB(t)

        executor := &MysqlExecutor{
            engine: x,
            log:    log.New("tsdb.mysql"),
        endpoint := &MysqlQueryEndpoint{
            sqlEngine: &tsdb.DefaultSqlEngine{
                MacroEngine: NewMysqlMacroEngine(),
                XormEngine:  x,
            },
            log: log.New("tsdb.mysql"),
        }

        sess := x.NewSession()
        defer sess.Close()
        db := sess.DB()

        sql := "CREATE TABLE `mysql_types` ("
        sql += "`atinyint` tinyint(1),"

@@ -70,14 +72,23 @@ func TestMySQL(t *testing.T) {
        _, err = sess.Exec(sql)
        So(err, ShouldBeNil)

        Convey("TransformToTable should map MySQL column types to Go types", func() {
            rows, err := db.Query("SELECT * FROM mysql_types")
            defer rows.Close()
        Convey("Query with Table format should map MySQL column types to Go types", func() {
            query := &tsdb.TsdbQuery{
                Queries: []*tsdb.Query{
                    {
                        Model: simplejson.NewFromAny(map[string]interface{}{
                            "rawSql": "SELECT * FROM mysql_types",
                            "format": "table",
                        }),
                        RefId: "A",
                    },
                },
            }

            resp, err := endpoint.Query(nil, nil, query)
            queryResult := resp.Results["A"]
            So(err, ShouldBeNil)

            queryResult := &tsdb.QueryResult{Meta: simplejson.New()}
            err = executor.TransformToTable(nil, rows, queryResult)
            So(err, ShouldBeNil)
            column := queryResult.Tables[0].Rows[0]
            So(*column[0].(*int8), ShouldEqual, 1)
            So(*column[1].(*string), ShouldEqual, "abc")

pkg/tsdb/postgres/macros.go (new file, 99 lines)
@@ -0,0 +1,99 @@
package postgres

import (
    "fmt"
    "regexp"
    "strings"

    "github.com/grafana/grafana/pkg/tsdb"
)

//const rsString = `(?:"([^"]*)")`;
const rsIdentifier = `([_a-zA-Z0-9]+)`
const sExpr = `\$` + rsIdentifier + `\(([^\)]*)\)`

type PostgresMacroEngine struct {
    TimeRange *tsdb.TimeRange
}

func NewPostgresMacroEngine() tsdb.SqlMacroEngine {
    return &PostgresMacroEngine{}
}

func (m *PostgresMacroEngine) Interpolate(timeRange *tsdb.TimeRange, sql string) (string, error) {
    m.TimeRange = timeRange
    rExp, _ := regexp.Compile(sExpr)
    var macroError error

    sql = replaceAllStringSubmatchFunc(rExp, sql, func(groups []string) string {
        res, err := m.evaluateMacro(groups[1], strings.Split(groups[2], ","))
        if err != nil && macroError == nil {
            macroError = err
            return "macro_error()"
        }
        return res
    })

    if macroError != nil {
        return "", macroError
    }

    return sql, nil
}

func replaceAllStringSubmatchFunc(re *regexp.Regexp, str string, repl func([]string) string) string {
    result := ""
    lastIndex := 0

    for _, v := range re.FindAllSubmatchIndex([]byte(str), -1) {
        groups := []string{}
        for i := 0; i < len(v); i += 2 {
            groups = append(groups, str[v[i]:v[i+1]])
        }

        result += str[lastIndex:v[0]] + repl(groups)
        lastIndex = v[1]
    }

    return result + str[lastIndex:]
}

func (m *PostgresMacroEngine) evaluateMacro(name string, args []string) (string, error) {
    switch name {
    case "__time":
        if len(args) == 0 {
            return "", fmt.Errorf("missing time column argument for macro %v", name)
        }
        return fmt.Sprintf("%s AS \"time\"", args[0]), nil
    case "__timeEpoch":
        if len(args) == 0 {
            return "", fmt.Errorf("missing time column argument for macro %v", name)
        }
        return fmt.Sprintf("extract(epoch from %s) as \"time\"", args[0]), nil
    case "__timeFilter":
        if len(args) == 0 {
            return "", fmt.Errorf("missing time column argument for macro %v", name)
        }
        return fmt.Sprintf("%s >= to_timestamp(%d) AND %s <= to_timestamp(%d)", args[0], uint64(m.TimeRange.GetFromAsMsEpoch()/1000), args[0], uint64(m.TimeRange.GetToAsMsEpoch()/1000)), nil
    case "__timeFrom":
        return fmt.Sprintf("to_timestamp(%d)", uint64(m.TimeRange.GetFromAsMsEpoch()/1000)), nil
    case "__timeTo":
        return fmt.Sprintf("to_timestamp(%d)", uint64(m.TimeRange.GetToAsMsEpoch()/1000)), nil
    case "__timeGroup":
        if len(args) < 2 {
            return "", fmt.Errorf("macro %v needs time column and interval", name)
        }
        return fmt.Sprintf("(extract(epoch from \"%s\")/extract(epoch from %s::interval))::int", args[0], args[1]), nil
    case "__unixEpochFilter":
        if len(args) == 0 {
            return "", fmt.Errorf("missing time column argument for macro %v", name)
        }
        return fmt.Sprintf("%s >= %d AND %s <= %d", args[0], uint64(m.TimeRange.GetFromAsMsEpoch()/1000), args[0], uint64(m.TimeRange.GetToAsMsEpoch()/1000)), nil
    case "__unixEpochFrom":
        return fmt.Sprintf("%d", uint64(m.TimeRange.GetFromAsMsEpoch()/1000)), nil
    case "__unixEpochTo":
        return fmt.Sprintf("%d", uint64(m.TimeRange.GetToAsMsEpoch()/1000)), nil
    default:
        return "", fmt.Errorf("Unknown macro %v", name)
    }
}

pkg/tsdb/postgres/macros_test.go (new file, 80 lines)
@@ -0,0 +1,80 @@
package postgres

import (
    "testing"

    "github.com/grafana/grafana/pkg/tsdb"
    . "github.com/smartystreets/goconvey/convey"
)

func TestMacroEngine(t *testing.T) {
    Convey("MacroEngine", t, func() {
        engine := &PostgresMacroEngine{}
        timeRange := &tsdb.TimeRange{From: "5m", To: "now"}

        Convey("interpolate __time function", func() {
            sql, err := engine.Interpolate(nil, "select $__time(time_column)")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select time_column AS \"time\"")
        })

        Convey("interpolate __time function wrapped in aggregation", func() {
            sql, err := engine.Interpolate(nil, "select min($__time(time_column))")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select min(time_column AS \"time\")")
        })

        Convey("interpolate __timeFilter function", func() {
            sql, err := engine.Interpolate(timeRange, "WHERE $__timeFilter(time_column)")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "WHERE time_column >= to_timestamp(18446744066914186738) AND time_column <= to_timestamp(18446744066914187038)")
        })

        Convey("interpolate __timeFrom function", func() {
            sql, err := engine.Interpolate(timeRange, "select $__timeFrom(time_column)")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select to_timestamp(18446744066914186738)")
        })

        Convey("interpolate __timeGroup function", func() {

            sql, err := engine.Interpolate(timeRange, "GROUP BY $__timeGroup(time_column,'5m')")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "GROUP BY (extract(epoch from \"time_column\")/extract(epoch from '5m'::interval))::int")
        })

        Convey("interpolate __timeTo function", func() {
            sql, err := engine.Interpolate(timeRange, "select $__timeTo(time_column)")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select to_timestamp(18446744066914187038)")
        })

        Convey("interpolate __unixEpochFilter function", func() {
            sql, err := engine.Interpolate(timeRange, "select $__unixEpochFilter(18446744066914186738)")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select 18446744066914186738 >= 18446744066914186738 AND 18446744066914186738 <= 18446744066914187038")
        })

        Convey("interpolate __unixEpochFrom function", func() {
            sql, err := engine.Interpolate(timeRange, "select $__unixEpochFrom()")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select 18446744066914186738")
        })

        Convey("interpolate __unixEpochTo function", func() {
            sql, err := engine.Interpolate(timeRange, "select $__unixEpochTo()")
            So(err, ShouldBeNil)

            So(sql, ShouldEqual, "select 18446744066914187038")
        })

    })
}

pkg/tsdb/postgres/postgres.go (new file, 245 lines)
@@ -0,0 +1,245 @@
package postgres

import (
    "container/list"
    "context"
    "fmt"
    "strconv"
    "time"

    "github.com/go-xorm/core"
    "github.com/grafana/grafana/pkg/components/null"
    "github.com/grafana/grafana/pkg/log"
    "github.com/grafana/grafana/pkg/models"
    "github.com/grafana/grafana/pkg/tsdb"
)

type PostgresQueryEndpoint struct {
    sqlEngine tsdb.SqlEngine
    log       log.Logger
}

func init() {
    tsdb.RegisterTsdbQueryEndpoint("postgres", NewPostgresQueryEndpoint)
}

func NewPostgresQueryEndpoint(datasource *models.DataSource) (tsdb.TsdbQueryEndpoint, error) {
    endpoint := &PostgresQueryEndpoint{
        log: log.New("tsdb.postgres"),
    }

    endpoint.sqlEngine = &tsdb.DefaultSqlEngine{
        MacroEngine: NewPostgresMacroEngine(),
    }

    cnnstr := generateConnectionString(datasource)
    endpoint.log.Debug("getEngine", "connection", cnnstr)

    if err := endpoint.sqlEngine.InitEngine("postgres", datasource, cnnstr); err != nil {
        return nil, err
    }

    return endpoint, nil
}

func generateConnectionString(datasource *models.DataSource) string {
    password := ""
    for key, value := range datasource.SecureJsonData.Decrypt() {
        if key == "password" {
            password = value
            break
        }
    }

    sslmode := datasource.JsonData.Get("sslmode").MustString("require")
    return fmt.Sprintf("postgres://%s:%s@%s/%s?sslmode=%s", datasource.User, password, datasource.Url, datasource.Database, sslmode)
}

func (e *PostgresQueryEndpoint) Query(ctx context.Context, dsInfo *models.DataSource, tsdbQuery *tsdb.TsdbQuery) (*tsdb.Response, error) {
    return e.sqlEngine.Query(ctx, dsInfo, tsdbQuery, e.transformToTimeSeries, e.transformToTable)
}

func (e PostgresQueryEndpoint) transformToTable(query *tsdb.Query, rows *core.Rows, result *tsdb.QueryResult) error {

    columnNames, err := rows.Columns()
    if err != nil {
        return err
    }

    table := &tsdb.Table{
        Columns: make([]tsdb.TableColumn, len(columnNames)),
        Rows:    make([]tsdb.RowValues, 0),
    }

    for i, name := range columnNames {
        table.Columns[i].Text = name
    }

    rowLimit := 1000000
    rowCount := 0

    for ; rows.Next(); rowCount++ {
        if rowCount > rowLimit {
            return fmt.Errorf("PostgreSQL query row limit exceeded, limit %d", rowLimit)
        }

        values, err := e.getTypedRowData(rows)
        if err != nil {
            return err
        }

        table.Rows = append(table.Rows, values)
    }

    result.Tables = append(result.Tables, table)
    result.Meta.Set("rowCount", rowCount)
    return nil
}

func (e PostgresQueryEndpoint) getTypedRowData(rows *core.Rows) (tsdb.RowValues, error) {

    types, err := rows.ColumnTypes()
    if err != nil {
        return nil, err
    }

    values := make([]interface{}, len(types))
    valuePtrs := make([]interface{}, len(types))

    for i := 0; i < len(types); i++ {
        valuePtrs[i] = &values[i]
    }

    if err := rows.Scan(valuePtrs...); err != nil {
        return nil, err
    }

    // convert types not handled by lib/pq
    // unhandled types are returned as []byte
    for i := 0; i < len(types); i++ {
        if value, ok := values[i].([]byte); ok == true {
            switch types[i].DatabaseTypeName() {
            case "NUMERIC":
                if v, err := strconv.ParseFloat(string(value), 64); err == nil {
                    values[i] = v
                } else {
                    e.log.Debug("Rows", "Error converting numeric to float", value)
                }
            case "UNKNOWN", "CIDR", "INET", "MACADDR":
                // char literals have type UNKNOWN
                values[i] = string(value)
            default:
                e.log.Debug("Rows", "Unknown database type", types[i].DatabaseTypeName(), "value", value)
                values[i] = string(value)
            }
        }
    }

    return values, nil
}

func (e PostgresQueryEndpoint) transformToTimeSeries(query *tsdb.Query, rows *core.Rows, result *tsdb.QueryResult) error {
    pointsBySeries := make(map[string]*tsdb.TimeSeries)
    seriesByQueryOrder := list.New()
    columnNames, err := rows.Columns()

    if err != nil {
        return err
    }

    rowLimit := 1000000
    rowCount := 0
    timeIndex := -1
    metricIndex := -1

    // check columns of resultset
    for i, col := range columnNames {
        switch col {
        case "time":
            timeIndex = i
        case "metric":
            metricIndex = i
        }
    }

    if timeIndex == -1 {
        return fmt.Errorf("Found no column named time")
    }

    for rows.Next() {
        var timestamp float64
        var value null.Float
        var metric string

        if rowCount > rowLimit {
            return fmt.Errorf("PostgreSQL query row limit exceeded, limit %d", rowLimit)
        }

        values, err := e.getTypedRowData(rows)
        if err != nil {
            return err
        }

        switch columnValue := values[timeIndex].(type) {
        case int64:
            timestamp = float64(columnValue * 1000)
        case float64:
            timestamp = columnValue * 1000
        case time.Time:
            timestamp = float64(columnValue.Unix() * 1000)
        default:
            return fmt.Errorf("Invalid type for column time, must be of type timestamp or unix timestamp")
        }

        if metricIndex >= 0 {
            if columnValue, ok := values[metricIndex].(string); ok == true {
                metric = columnValue
            } else {
                return fmt.Errorf("Column metric must be of type char,varchar or text")
            }
        }

        for i, col := range columnNames {
            if i == timeIndex || i == metricIndex {
                continue
            }

            switch columnValue := values[i].(type) {
            case int64:
                value = null.FloatFrom(float64(columnValue))
            case float64:
                value = null.FloatFrom(columnValue)
            case nil:
                value.Valid = false
            default:
                return fmt.Errorf("Value column must have numeric datatype, column: %s type: %T value: %v", col, columnValue, columnValue)
            }
            if metricIndex == -1 {
                metric = col
            }
            e.appendTimePoint(pointsBySeries, seriesByQueryOrder, metric, timestamp, value)
            rowCount++

        }
    }

    for elem := seriesByQueryOrder.Front(); elem != nil; elem = elem.Next() {
        key := elem.Value.(string)
        result.Series = append(result.Series, pointsBySeries[key])
    }

    result.Meta.Set("rowCount", rowCount)
    return nil
}

func (e PostgresQueryEndpoint) appendTimePoint(pointsBySeries map[string]*tsdb.TimeSeries, seriesByQueryOrder *list.List, metric string, timestamp float64, value null.Float) {
    if series, exist := pointsBySeries[metric]; exist {
        series.Points = append(series.Points, tsdb.TimePoint{value, null.FloatFrom(timestamp)})
    } else {
        series := &tsdb.TimeSeries{Name: metric}
        series.Points = append(series.Points, tsdb.TimePoint{value, null.FloatFrom(timestamp)})
        pointsBySeries[metric] = series
        seriesByQueryOrder.PushBack(metric)
    }
    e.log.Debug("Rows", "metric", metric, "time", timestamp, "value", value)
}

pkg/tsdb/postgres/postgres_test.go (new file, 125 lines)
@@ -0,0 +1,125 @@
package postgres

import (
    "testing"
    "time"

    "github.com/go-xorm/xorm"
    "github.com/grafana/grafana/pkg/components/simplejson"
    "github.com/grafana/grafana/pkg/log"
    "github.com/grafana/grafana/pkg/services/sqlstore/sqlutil"
    "github.com/grafana/grafana/pkg/tsdb"
    _ "github.com/lib/pq"
    . "github.com/smartystreets/goconvey/convey"
)

// To run this test, remove the Skip from SkipConvey
// and set up a PostgreSQL db named grafanatest and a user/password grafanatest/grafanatest
func TestPostgres(t *testing.T) {
    SkipConvey("PostgreSQL", t, func() {
        x := InitPostgresTestDB(t)

        endpoint := &PostgresQueryEndpoint{
            sqlEngine: &tsdb.DefaultSqlEngine{
                MacroEngine: NewPostgresMacroEngine(),
                XormEngine:  x,
            },
            log: log.New("tsdb.postgres"),
        }

        sess := x.NewSession()
        defer sess.Close()

        sql := `
            CREATE TABLE postgres_types(
                c00_smallint smallint,
                c01_integer integer,
                c02_bigint bigint,

                c03_real real,
                c04_double double precision,
                c05_decimal decimal(10,2),
                c06_numeric numeric(10,2),

                c07_char char(10),
                c08_varchar varchar(10),
                c09_text text,

                c10_timestamp timestamp without time zone,
                c11_timestamptz timestamp with time zone,
                c12_date date,
                c13_time time without time zone,
                c14_timetz time with time zone,
                c15_interval interval
            );
        `
        _, err := sess.Exec(sql)
        So(err, ShouldBeNil)

        sql = `
            INSERT INTO postgres_types VALUES(
                1,2,3,
                4.5,6.7,1.1,1.2,
                'char10','varchar10','text',

                now(),now(),now(),now(),now(),'15m'::interval
            );
        `
        _, err = sess.Exec(sql)
        So(err, ShouldBeNil)

        Convey("Query with Table format should map PostgreSQL column types to Go types", func() {
            query := &tsdb.TsdbQuery{
                Queries: []*tsdb.Query{
                    {
                        Model: simplejson.NewFromAny(map[string]interface{}{
                            "rawSql": "SELECT * FROM postgres_types",
                            "format": "table",
                        }),
                        RefId: "A",
                    },
                },
            }

            resp, err := endpoint.Query(nil, nil, query)
            queryResult := resp.Results["A"]
            So(err, ShouldBeNil)

            column := queryResult.Tables[0].Rows[0]
            So(column[0].(int64), ShouldEqual, 1)
            So(column[1].(int64), ShouldEqual, 2)
            So(column[2].(int64), ShouldEqual, 3)
            So(column[3].(float64), ShouldEqual, 4.5)
            So(column[4].(float64), ShouldEqual, 6.7)
            // libpq doesnt properly convert decimal, numeric and char to go types but returns []uint8 instead
            // So(column[5].(float64), ShouldEqual, 1.1)
            // So(column[6].(float64), ShouldEqual, 1.2)
            // So(column[7].(string), ShouldEqual, "char")
            So(column[8].(string), ShouldEqual, "varchar10")
            So(column[9].(string), ShouldEqual, "text")

            So(column[10].(time.Time), ShouldHaveSameTypeAs, time.Now())
            So(column[11].(time.Time), ShouldHaveSameTypeAs, time.Now())
            So(column[12].(time.Time), ShouldHaveSameTypeAs, time.Now())
            So(column[13].(time.Time), ShouldHaveSameTypeAs, time.Now())
            So(column[14].(time.Time), ShouldHaveSameTypeAs, time.Now())

            // libpq doesnt properly convert interval to go types but returns []uint8 instead
            // So(column[15].(time.Time), ShouldHaveSameTypeAs, time.Now())
        })
    })
}

func InitPostgresTestDB(t *testing.T) *xorm.Engine {
    x, err := xorm.NewEngine(sqlutil.TestDB_Postgres.DriverName, sqlutil.TestDB_Postgres.ConnStr)

    // x.ShowSQL()

    if err != nil {
        t.Fatalf("Failed to init postgres db %v", err)
    }

    sqlutil.CleanDB(x)

    return x
}

pkg/tsdb/sql_engine.go (new file, 134 lines)
@@ -0,0 +1,134 @@
package tsdb

import (
    "context"
    "sync"

    "github.com/go-xorm/core"
    "github.com/go-xorm/xorm"
    "github.com/grafana/grafana/pkg/components/simplejson"
    "github.com/grafana/grafana/pkg/models"
)

// SqlEngine is a wrapper class around xorm for relational database data sources.
type SqlEngine interface {
    InitEngine(driverName string, dsInfo *models.DataSource, cnnstr string) error
    Query(
        ctx context.Context,
        ds *models.DataSource,
        query *TsdbQuery,
        transformToTimeSeries func(query *Query, rows *core.Rows, result *QueryResult) error,
        transformToTable func(query *Query, rows *core.Rows, result *QueryResult) error,
    ) (*Response, error)
}

// SqlMacroEngine interpolates macros into sql. It takes in the timeRange to be able to
// generate queries that use from and to.
type SqlMacroEngine interface {
    Interpolate(timeRange *TimeRange, sql string) (string, error)
}

type DefaultSqlEngine struct {
    MacroEngine SqlMacroEngine
    XormEngine  *xorm.Engine
}

type engineCacheType struct {
    cache    map[int64]*xorm.Engine
    versions map[int64]int
    sync.Mutex
}

var engineCache = engineCacheType{
    cache:    make(map[int64]*xorm.Engine),
    versions: make(map[int64]int),
}

// InitEngine creates the db connection and inits the xorm engine or loads it from the engine cache
func (e *DefaultSqlEngine) InitEngine(driverName string, dsInfo *models.DataSource, cnnstr string) error {
    engineCache.Lock()
    defer engineCache.Unlock()

    if engine, present := engineCache.cache[dsInfo.Id]; present {
        if version, _ := engineCache.versions[dsInfo.Id]; version == dsInfo.Version {
            e.XormEngine = engine
            return nil
        }
    }

    engine, err := xorm.NewEngine(driverName, cnnstr)
    engine.SetMaxOpenConns(10)
    engine.SetMaxIdleConns(10)
    if err != nil {
        return err
    }

    engineCache.cache[dsInfo.Id] = engine
    e.XormEngine = engine

    return nil
}

// Query is a default implementation of the Query method for an SQL data source.
// The caller of this function must implement transformToTimeSeries and transformToTable and
// pass them in as parameters.
func (e *DefaultSqlEngine) Query(
    ctx context.Context,
    dsInfo *models.DataSource,
    tsdbQuery *TsdbQuery,
    transformToTimeSeries func(query *Query, rows *core.Rows, result *QueryResult) error,
    transformToTable func(query *Query, rows *core.Rows, result *QueryResult) error,
) (*Response, error) {
    result := &Response{
        Results: make(map[string]*QueryResult),
    }

    session := e.XormEngine.NewSession()
    defer session.Close()
    db := session.DB()

    for _, query := range tsdbQuery.Queries {
        rawSql := query.Model.Get("rawSql").MustString()
        if rawSql == "" {
            continue
        }

        queryResult := &QueryResult{Meta: simplejson.New(), RefId: query.RefId}
        result.Results[query.RefId] = queryResult

        rawSql, err := e.MacroEngine.Interpolate(tsdbQuery.TimeRange, rawSql)
        if err != nil {
            queryResult.Error = err
            continue
        }

        queryResult.Meta.Set("sql", rawSql)

        rows, err := db.Query(rawSql)
        if err != nil {
            queryResult.Error = err
            continue
        }

        defer rows.Close()

        format := query.Model.Get("format").MustString("time_series")

        switch format {
        case "time_series":
            err := transformToTimeSeries(query, rows, queryResult)
            if err != nil {
                queryResult.Error = err
                continue
            }
        case "table":
            err := transformToTable(query, rows, queryResult)
            if err != nil {
                queryResult.Error = err
                continue
            }
        }
    }

    return result, nil
}

public/app/core/specs/time_series_specs.ts (new file, 299 lines)
@@ -0,0 +1,299 @@
import {describe, beforeEach, it, expect} from 'test/lib/common';
import TimeSeries from 'app/core/time_series2';

describe("TimeSeries", function() {
  var points, series;
  var yAxisFormats = ['short', 'ms'];
  var testData;

  beforeEach(function() {
    testData = {
      alias: 'test',
      datapoints: [
        [1,2],[null,3],[10,4],[8,5]
      ]
    };
  });

  describe('when getting flot pairs', function() {
    it('with connected style, should ignore nulls', function() {
      series = new TimeSeries(testData);
      points = series.getFlotPairs('connected', yAxisFormats);
      expect(points.length).to.be(3);
    });

    it('with null as zero style, should replace nulls with zero', function() {
      series = new TimeSeries(testData);
      points = series.getFlotPairs('null as zero', yAxisFormats);
      expect(points.length).to.be(4);
      expect(points[1][1]).to.be(0);
    });

    it('if last is null current should pick next to last', function() {
      series = new TimeSeries({
        datapoints: [[10,1], [null, 2]]
      });
      series.getFlotPairs('null', yAxisFormats);
      expect(series.stats.current).to.be(10);
    });

    it('max value should work for negative values', function() {
      series = new TimeSeries({
        datapoints: [[-10,1], [-4, 2]]
      });
      series.getFlotPairs('null', yAxisFormats);
      expect(series.stats.max).to.be(-4);
    });

    it('average value should ignore nulls', function() {
      series = new TimeSeries(testData);
      series.getFlotPairs('null', yAxisFormats);
      expect(series.stats.avg).to.be(6.333333333333333);
    });

    it('the delta value should account for nulls', function() {
      series = new TimeSeries({
        datapoints: [[1,2],[3,3],[null,4],[10,5],[15,6]]
      });
      series.getFlotPairs('null', yAxisFormats);
      expect(series.stats.delta).to.be(14);
    });

    it('the delta value should account for nulls on first', function() {
      series = new TimeSeries({
        datapoints: [[null,2],[1,3],[10,4],[15,5]]
      });
      series.getFlotPairs('null', yAxisFormats);
      expect(series.stats.delta).to.be(14);
    });

    it('the delta value should account for nulls on last', function() {
      series = new TimeSeries({
        datapoints: [[1,2],[5,3],[10,4],[null,5]]
      });
      series.getFlotPairs('null', yAxisFormats);
      expect(series.stats.delta).to.be(9);
    });

    it('the delta value should account for resets', function() {
      series = new TimeSeries({
        datapoints: [[1,2],[5,3],[10,4],[0,5],[10,6]]
      });
      series.getFlotPairs('null', yAxisFormats);
      expect(series.stats.delta).to.be(19);
    });

    it('the delta value should account for resets on last', function() {
      series = new TimeSeries({
        datapoints: [[1,2],[2,3],[10,4],[8,5]]
      });
      series.getFlotPairs('null', yAxisFormats);
      expect(series.stats.delta).to.be(17);
    });

    it('the range value should be max - min', function() {
      series = new TimeSeries(testData);
      series.getFlotPairs('null', yAxisFormats);
      expect(series.stats.range).to.be(9);
    });

    it('first value should ignore nulls', function() {
      series = new TimeSeries(testData);
      series.getFlotPairs('null', yAxisFormats);
      expect(series.stats.first).to.be(1);
      series = new TimeSeries({
        datapoints: [[null,2],[1,3],[10,4],[8,5]]
      });
      series.getFlotPairs('null', yAxisFormats);
      expect(series.stats.first).to.be(1);
    });

    it('with null as zero style, average value should treat nulls as 0', function() {
      series = new TimeSeries(testData);
      series.getFlotPairs('null as zero', yAxisFormats);
      expect(series.stats.avg).to.be(4.75);
    });

    it('average value should be null if all values are null', function() {
      series = new TimeSeries({
        datapoints: [[null,2],[null,3],[null,4],[null,5]]
      });
      series.getFlotPairs('null');
      expect(series.stats.avg).to.be(null);
    });
  });

  describe('When checking if ms resolution is needed', function() {
    describe('msResolution with second resolution timestamps', function() {
      beforeEach(function() {
        series = new TimeSeries({datapoints: [[45, 1234567890], [60, 1234567899]]});
      });

      it('should set hasMsResolution to false', function() {
        expect(series.hasMsResolution).to.be(false);
      });
    });

    describe('msResolution with millisecond resolution timestamps', function() {
      beforeEach(function() {
        series = new TimeSeries({datapoints: [[55, 1236547890001], [90, 1234456709000]]});
      });

      it('should show millisecond resolution tooltip', function() {
        expect(series.hasMsResolution).to.be(true);
      });
    });

    describe('msResolution with millisecond resolution timestamps but with trailing zeroes', function() {
      beforeEach(function() {
        series = new TimeSeries({datapoints: [[45, 1234567890000], [60, 1234567899000]]});
      });

      it('should not show millisecond resolution tooltip', function() {
        expect(series.hasMsResolution).to.be(false);
      });
    });
  });

  describe('can detect if series contains ms precision', function() {
    var fakedata;

    beforeEach(function() {
      fakedata = testData;
    });

    it('missing datapoint with ms precision', function() {
      fakedata.datapoints[0] = [1337, 1234567890000];
      series = new TimeSeries(fakedata);
      expect(series.isMsResolutionNeeded()).to.be(false);
    });

    it('contains datapoint with ms precision', function() {
      fakedata.datapoints[0] = [1337, 1236547890001];
      series = new TimeSeries(fakedata);
      expect(series.isMsResolutionNeeded()).to.be(true);
    });
  });

  describe('series overrides', function() {
    var series;
    beforeEach(function() {
      series = new TimeSeries(testData);
    });

    describe('fill & points', function() {
      beforeEach(function() {
        series.alias = 'test';
        series.applySeriesOverrides([{ alias: 'test', fill: 0, points: true }]);
      });

      it('should set fill zero, and enable points', function() {
        expect(series.lines.fill).to.be(0.001);
        expect(series.points.show).to.be(true);
      });
    });

    describe('series option overrides, bars, true & lines false', function() {
      beforeEach(function() {
        series.alias = 'test';
        series.applySeriesOverrides([{ alias: 'test', bars: true, lines: false }]);
      });

      it('should disable lines, and enable bars', function() {
        expect(series.lines.show).to.be(false);
        expect(series.bars.show).to.be(true);
      });
    });

    describe('series option overrides, linewidth, stack', function() {
      beforeEach(function() {
        series.alias = 'test';
        series.applySeriesOverrides([{ alias: 'test', linewidth: 5, stack: false }]);
      });

      it('should disable stack, and set lineWidth', function() {
        expect(series.stack).to.be(false);
        expect(series.lines.lineWidth).to.be(5);
      });
    });

    describe('series option overrides, dashes and lineWidth', function() {
      beforeEach(function() {
        series.alias = 'test';
        series.applySeriesOverrides([{ alias: 'test', linewidth: 5, dashes: true }]);
      });

      it('should enable dashes, set dashes lineWidth to 5 and lines lineWidth to 0', function() {
        expect(series.dashes.show).to.be(true);
        expect(series.dashes.lineWidth).to.be(5);
|
||||
expect(series.lines.lineWidth).to.be(0);
|
||||
});
|
||||
});
|
||||
|
||||
describe('series option overrides, fill below to', function() {
|
||||
beforeEach(function() {
|
||||
series.alias = 'test';
|
||||
series.applySeriesOverrides([{ alias: 'test', fillBelowTo: 'min' }]);
|
||||
});
|
||||
|
||||
it('should disable line fill and add fillBelowTo', function() {
|
||||
expect(series.fillBelowTo).to.be('min');
|
||||
});
|
||||
});
|
||||
|
||||
describe('series option overrides, pointradius, steppedLine', function() {
|
||||
beforeEach(function() {
|
||||
series.alias = 'test';
|
||||
series.applySeriesOverrides([{ alias: 'test', pointradius: 5, steppedLine: true }]);
|
||||
});
|
||||
|
||||
it('should set pointradius, and set steppedLine', function() {
|
||||
expect(series.points.radius).to.be(5);
|
||||
expect(series.lines.steps).to.be(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe('override match on regex', function() {
|
||||
beforeEach(function() {
|
||||
series.alias = 'test_01';
|
||||
series.applySeriesOverrides([{ alias: '/.*01/', lines: false }]);
|
||||
});
|
||||
|
||||
it('should match second series', function() {
|
||||
expect(series.lines.show).to.be(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('override series y-axis, and z-index', function() {
|
||||
beforeEach(function() {
|
||||
series.alias = 'test';
|
||||
series.applySeriesOverrides([{ alias: 'test', yaxis: 2, zindex: 2 }]);
|
||||
});
|
||||
|
||||
it('should set yaxis', function() {
|
||||
expect(series.yaxis).to.be(2);
|
||||
});
|
||||
|
||||
it('should set zindex', function() {
|
||||
expect(series.zindex).to.be(2);
|
||||
});
|
||||
});
|
||||
|
||||
});
|
||||
|
||||
describe('value formatter', function() {
|
||||
var series;
|
||||
beforeEach(function() {
|
||||
series = new TimeSeries(testData);
|
||||
});
|
||||
|
||||
it('should format non-numeric values as empty string', function() {
|
||||
expect(series.formatValue(null)).to.be("");
|
||||
expect(series.formatValue(undefined)).to.be("");
|
||||
expect(series.formatValue(NaN)).to.be("");
|
||||
expect(series.formatValue(Infinity)).to.be("");
|
||||
expect(series.formatValue(-Infinity)).to.be("");
|
||||
});
|
||||
});
|
||||
|
||||
});
|
@@ -203,7 +203,7 @@ export default class TimeSeries {
    if (this.stats.max === -Number.MAX_VALUE) { this.stats.max = null; }
    if (this.stats.min === Number.MAX_VALUE) { this.stats.min = null; }

    if (result.length) {
    if (result.length && !this.allIsNull) {
      this.stats.avg = (this.stats.total / nonNulls);
      this.stats.current = result[result.length-1][1];
      if (this.stats.current === null && result.length > 1) {
@@ -228,6 +228,9 @@ export default class TimeSeries {
  }

  formatValue(value) {
    if (!_.isFinite(value)) {
      value = null; // Prevent NaN formatting
    }
    return this.valueFormater(value, this.decimals, this.scaledDecimals);
  }

@@ -52,14 +52,14 @@ var template = `
  <h3 class="page-heading">Preferences</h3>

  <div class="gf-form">
    <span class="gf-form-label width-10">UI Theme</span>
    <span class="gf-form-label width-11">UI Theme</span>
    <div class="gf-form-select-wrapper max-width-20">
      <select class="gf-form-input" ng-model="ctrl.prefs.theme" ng-options="f.value as f.text for f in ctrl.themes"></select>
    </div>
  </div>

  <div class="gf-form">
    <span class="gf-form-label width-10">
    <span class="gf-form-label width-11">
      Home Dashboard
      <info-popover mode="right-normal">
        Not finding dashboard you want? Star it first, then it should appear in this select box.
@@ -70,7 +70,7 @@ var template = `
  </div>

  <div class="gf-form">
    <label class="gf-form-label width-10">Timezone</label>
    <label class="gf-form-label width-11">Timezone</label>
    <div class="gf-form-select-wrapper max-width-20">
      <select class="gf-form-input" ng-model="ctrl.prefs.timezone" ng-options="f.value as f.text for f in ctrl.timezones"></select>
    </div>
@@ -6,6 +6,7 @@ import * as grafanaPlugin from 'app/plugins/datasource/grafana/module';
import * as influxdbPlugin from 'app/plugins/datasource/influxdb/module';
import * as mixedPlugin from 'app/plugins/datasource/mixed/module';
import * as mysqlPlugin from 'app/plugins/datasource/mysql/module';
import * as postgresPlugin from 'app/plugins/datasource/postgres/module';
import * as prometheusPlugin from 'app/plugins/datasource/prometheus/module';

import * as textPanel from 'app/plugins/panel/text/module';
@@ -29,6 +30,7 @@ const builtInPlugins = {
  "app/plugins/datasource/influxdb/module": influxdbPlugin,
  "app/plugins/datasource/mixed/module": mixedPlugin,
  "app/plugins/datasource/mysql/module": mysqlPlugin,
  "app/plugins/datasource/postgres/module": postgresPlugin,
  "app/plugins/datasource/prometheus/module": prometheusPlugin,
  "app/plugins/app/testdata/module": testDataAppPlugin,
  "app/plugins/app/testdata/datasource/module": testDataDSPlugin,
@@ -8,11 +8,12 @@ import jquery from 'jquery';
import config from 'app/core/config';
import TimeSeries from 'app/core/time_series2';
import TableModel from 'app/core/table_model';
import appEvents from 'app/core/app_events';
import {coreModule, appEvents, contextSrv} from 'app/core/core';
import {Observable} from 'rxjs/Observable';
import {Subject} from 'rxjs/Subject';
import * as datemath from 'app/core/utils/datemath';
import builtInPlugins from './buit_in_plugins';
import d3 from 'vendor/d3/d3';

System.config({
  baseURL: 'public',
@@ -50,6 +51,7 @@ exposeToPlugin('jquery', jquery);
exposeToPlugin('angular', angular);
exposeToPlugin('rxjs/Subject', Subject);
exposeToPlugin('rxjs/Observable', Observable);
exposeToPlugin('d3', d3);

exposeToPlugin('app/plugins/sdk', sdk);
exposeToPlugin('app/core/utils/datemath', datemath);
@@ -59,6 +61,13 @@ exposeToPlugin('app/core/time_series', TimeSeries);
exposeToPlugin('app/core/time_series2', TimeSeries);
exposeToPlugin('app/core/table_model', TableModel);
exposeToPlugin('app/core/app_events', appEvents);
exposeToPlugin('app/core/core_module', coreModule);
exposeToPlugin('app/core/core', {
  coreModule: coreModule,
  appEvents: appEvents,
  contextSrv: contextSrv,
});

import 'vendor/flot/jquery.flot';
import 'vendor/flot/jquery.flot.selection';
@@ -20,7 +20,15 @@ export class MysqlDatasource {
      return '\'' + value + '\'';
    }

    if (typeof value === 'number') {
      return value;
    }

    var quotedValues = _.map(value, function(val) {
      if (typeof value === 'number') {
        return value;
      }

      return '\'' + val + '\'';
    });
    return quotedValues.join(',');
@@ -9,17 +9,17 @@

  <div class="gf-form max-width-30">
    <span class="gf-form-label width-7">Database</span>
    <input type="text" class="gf-form-input" ng-model='ctrl.current.database' placeholder="" required></input>
    <input type="text" class="gf-form-input" ng-model='ctrl.current.database' placeholder="database name" required></input>
  </div>

  <div class="gf-form-inline">
    <div class="gf-form max-width-15">
      <span class="gf-form-label width-7">User</span>
      <input type="text" class="gf-form-input" ng-model='ctrl.current.user' placeholder=""></input>
      <input type="text" class="gf-form-input" ng-model='ctrl.current.user' placeholder="user"></input>
    </div>
    <div class="gf-form max-width-15">
      <span class="gf-form-label width-7">Password</span>
      <input type="password" class="gf-form-input" ng-model='ctrl.current.password' placeholder=""></input>
      <input type="password" class="gf-form-input" ng-model='ctrl.current.password' placeholder="password"></input>
    </div>
  </div>
</div>
@@ -49,6 +49,7 @@ Macros:
- $__time(column) -> UNIX_TIMESTAMP(column) as time_sec
- $__timeFilter(column) -> UNIX_TIMESTAMP(time_date_time) ≥ 1492750877 AND UNIX_TIMESTAMP(time_date_time) ≤ 1492750877
- $__unixEpochFilter(column) -> time_unix_epoch > 1492750877 AND time_unix_epoch < 1492750877
- $__timeGroup(column,'5m') -> (extract(epoch from "dateColumn")/extract(epoch from '5m'::interval))::int

Or build your own conditionals using these macros which just return the values:
- $__timeFrom() -> FROM_UNIXTIME(1492750877)
@@ -113,7 +113,7 @@ export default class ResponseParser {
      if (table.columns[i].text === 'time_sec') {
        timeColumnIndex = i;
      } else if (table.columns[i].text === 'title') {
        return this.$q.reject({message: 'Title return column on annotations are depricated, return only a column named text'});
        return this.$q.reject({message: 'The title column for annotations is deprecated, now only a column named text is returned'});
      } else if (table.columns[i].text === 'text') {
        textColumnIndex = i;
      } else if (table.columns[i].text === 'tags') {
@@ -193,4 +193,24 @@ describe('MySQLDatasource', function() {
      expect(results[0].value).to.be('same');
    });
  });

  describe('When interpolating variables', () => {
    describe('and value is a string', () => {
      it('should return a quoted value', () => {
        expect(ctx.ds.interpolateVariable('abc')).to.eql('\'abc\'');
      });
    });

    describe('and value is a number', () => {
      it('should return an unquoted value', () => {
        expect(ctx.ds.interpolateVariable(1000)).to.eql(1000);
      });
    });

    describe('and value is an array of strings', () => {
      it('should return comma separated quoted values', () => {
        expect(ctx.ds.interpolateVariable(['a', 'b', 'c'])).to.eql('\'a\',\'b\',\'c\'');
      });
    });
  });
});
public/app/plugins/datasource/postgres/README.md (new file, +3)
@@ -0,0 +1,3 @@
# Grafana PostgreSQL Datasource - Native Plugin

This is the built-in PostgreSQL Datasource that is used to connect to PostgreSQL databases.
public/app/plugins/datasource/postgres/datasource.ts (new file, +132)
@@ -0,0 +1,132 @@
///<reference path="../../../headers/common.d.ts" />

import _ from 'lodash';
import ResponseParser from './response_parser';

export class PostgresDatasource {
  id: any;
  name: any;
  responseParser: ResponseParser;

  /** @ngInject **/
  constructor(instanceSettings, private backendSrv, private $q, private templateSrv) {
    this.name = instanceSettings.name;
    this.id = instanceSettings.id;
    this.responseParser = new ResponseParser(this.$q);
  }

  interpolateVariable(value) {
    if (typeof value === 'string') {
      return '\'' + value + '\'';
    }

    var quotedValues = _.map(value, function(val) {
      return '\'' + val + '\'';
    });
    return quotedValues.join(',');
  }

  query(options) {
    var queries = _.filter(options.targets, item => {
      return item.hide !== true;
    }).map(item => {
      return {
        refId: item.refId,
        intervalMs: options.intervalMs,
        maxDataPoints: options.maxDataPoints,
        datasourceId: this.id,
        rawSql: this.templateSrv.replace(item.rawSql, options.scopedVars, this.interpolateVariable),
        format: item.format,
      };
    });

    if (queries.length === 0) {
      return this.$q.when({data: []});
    }

    return this.backendSrv.datasourceRequest({
      url: '/api/tsdb/query',
      method: 'POST',
      data: {
        from: options.range.from.valueOf().toString(),
        to: options.range.to.valueOf().toString(),
        queries: queries,
      }
    }).then(this.responseParser.processQueryResult);
  }

  annotationQuery(options) {
    if (!options.annotation.rawQuery) {
      return this.$q.reject({message: 'Query missing in annotation definition'});
    }

    const query = {
      refId: options.annotation.name,
      datasourceId: this.id,
      rawSql: this.templateSrv.replace(options.annotation.rawQuery, options.scopedVars, this.interpolateVariable),
      format: 'table',
    };

    return this.backendSrv.datasourceRequest({
      url: '/api/tsdb/query',
      method: 'POST',
      data: {
        from: options.range.from.valueOf().toString(),
        to: options.range.to.valueOf().toString(),
        queries: [query],
      }
    }).then(data => this.responseParser.transformAnnotationResponse(options, data));
  }

  metricFindQuery(query, optionalOptions) {
    let refId = 'tempvar';
    if (optionalOptions && optionalOptions.variable && optionalOptions.variable.name) {
      refId = optionalOptions.variable.name;
    }

    const interpolatedQuery = {
      refId: refId,
      datasourceId: this.id,
      rawSql: this.templateSrv.replace(query, {}, this.interpolateVariable),
      format: 'table',
    };

    return this.backendSrv.datasourceRequest({
      url: '/api/tsdb/query',
      method: 'POST',
      data: {
        queries: [interpolatedQuery],
      }
    })
    .then(data => this.responseParser.parseMetricFindQueryResult(refId, data));
  }

  testDatasource() {
    return this.backendSrv.datasourceRequest({
      url: '/api/tsdb/query',
      method: 'POST',
      data: {
        from: '5m',
        to: 'now',
        queries: [{
          refId: 'A',
          intervalMs: 1,
          maxDataPoints: 1,
          datasourceId: this.id,
          rawSql: "SELECT 1",
          format: 'table',
        }],
      }
    }).then(res => {
      return { status: "success", message: "Database Connection OK"};
    }).catch(err => {
      console.log(err);
      if (err.data && err.data.message) {
        return { status: "error", message: err.data.message };
      } else {
        return { status: "error", message: err.status };
      }
    });
  }
}
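To illustrate how `interpolateVariable` is fed into `templateSrv.replace` above: string values are wrapped in single quotes, and multi-value variables become a comma-separated list of quoted values, before the raw SQL is posted to `/api/tsdb/query`. A sketch with hypothetical table, column, and variable names (macros such as `$__time` are expanded later, server-side):

```sql
-- As typed in the editor, with a multi-value template variable $host:
SELECT $__time(created_at), load FROM server_metrics WHERE hostname IN ($host);

-- After interpolation with $host = (web01, web02), the rawSql sent to the backend is:
SELECT $__time(created_at), load FROM server_metrics WHERE hostname IN ('web01','web02');
```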
|
@ -0,0 +1,22 @@
|
||||
<?xml version="1.0"?>
|
||||
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
|
||||
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
|
||||
|
||||
<svg width="432.071pt" height="445.383pt" viewBox="0 0 432.071 445.383" xml:space="preserve" xmlns="http://www.w3.org/2000/svg">
|
||||
<g id="orginal" style="fill-rule:nonzero;clip-rule:nonzero;stroke:#000000;stroke-miterlimit:4;">
|
||||
</g>
|
||||
<g id="Layer_x0020_3" style="fill-rule:nonzero;clip-rule:nonzero;fill:none;stroke:#FFFFFF;stroke-width:12.4651;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;">
|
||||
<path style="fill:#000000;stroke:#000000;stroke-width:37.3953;stroke-linecap:butt;stroke-linejoin:miter;" d="M323.205,324.227c2.833-23.601,1.984-27.062,19.563-23.239l4.463,0.392c13.517,0.615,31.199-2.174,41.587-7c22.362-10.376,35.622-27.7,13.572-23.148c-50.297,10.376-53.755-6.655-53.755-6.655c53.111-78.803,75.313-178.836,56.149-203.322 C352.514-5.534,262.036,26.049,260.522,26.869l-0.482,0.089c-9.938-2.062-21.06-3.294-33.554-3.496c-22.761-0.374-40.032,5.967-53.133,15.904c0,0-161.408-66.498-153.899,83.628c1.597,31.936,45.777,241.655,98.47,178.31 c19.259-23.163,37.871-42.748,37.871-42.748c9.242,6.14,20.307,9.272,31.912,8.147l0.897-0.765c-0.281,2.876-0.157,5.689,0.359,9.019c-13.572,15.167-9.584,17.83-36.723,23.416c-27.457,5.659-11.326,15.734-0.797,18.367c12.768,3.193,42.305,7.716,62.268-20.224 l-0.795,3.188c5.325,4.26,4.965,30.619,5.72,49.452c0.756,18.834,2.017,36.409,5.856,46.771c3.839,10.36,8.369,37.05,44.036,29.406c29.809-6.388,52.6-15.582,54.677-101.107"/>
|
||||
<path style="fill:#336791;stroke:none;" d="M402.395,271.23c-50.302,10.376-53.76-6.655-53.76-6.655c53.111-78.808,75.313-178.843,56.153-203.326c-52.27-66.785-142.752-35.2-144.262-34.38l-0.486,0.087c-9.938-2.063-21.06-3.292-33.56-3.496c-22.761-0.373-40.026,5.967-53.127,15.902 c0,0-161.411-66.495-153.904,83.63c1.597,31.938,45.776,241.657,98.471,178.312c19.26-23.163,37.869-42.748,37.869-42.748c9.243,6.14,20.308,9.272,31.908,8.147l0.901-0.765c-0.28,2.876-0.152,5.689,0.361,9.019c-13.575,15.167-9.586,17.83-36.723,23.416 c-27.459,5.659-11.328,15.734-0.796,18.367c12.768,3.193,42.307,7.716,62.266-20.224l-0.796,3.188c5.319,4.26,9.054,27.711,8.428,48.969c-0.626,21.259-1.044,35.854,3.147,47.254c4.191,11.4,8.368,37.05,44.042,29.406c29.809-6.388,45.256-22.942,47.405-50.555 c1.525-19.631,4.976-16.729,5.194-34.28l2.768-8.309c3.192-26.611,0.507-35.196,18.872-31.203l4.463,0.392c13.517,0.615,31.208-2.174,41.591-7c22.358-10.376,35.618-27.7,13.573-23.148z"/>
|
||||
<path d="M215.866,286.484c-1.385,49.516,0.348,99.377,5.193,111.495c4.848,12.118,15.223,35.688,50.9,28.045c29.806-6.39,40.651-18.756,45.357-46.051c3.466-20.082,10.148-75.854,11.005-87.281"/>
|
||||
<path d="M173.104,38.256c0,0-161.521-66.016-154.012,84.109c1.597,31.938,45.779,241.664,98.473,178.316c19.256-23.166,36.671-41.335,36.671-41.335"/>
|
||||
<path d="M260.349,26.207c-5.591,1.753,89.848-34.889,144.087,34.417c19.159,24.484-3.043,124.519-56.153,203.329"/>
|
||||
<path style="stroke-linejoin:bevel;" d="M348.282,263.953c0,0,3.461,17.036,53.764,6.653c22.04-4.552,8.776,12.774-13.577,23.155c-18.345,8.514-59.474,10.696-60.146-1.069c-1.729-30.355,21.647-21.133,19.96-28.739c-1.525-6.85-11.979-13.573-18.894-30.338 c-6.037-14.633-82.796-126.849,21.287-110.183c3.813-0.789-27.146-99.002-124.553-100.599c-97.385-1.597-94.19,119.762-94.19,119.762"/>
|
||||
<path d="M188.604,274.334c-13.577,15.166-9.584,17.829-36.723,23.417c-27.459,5.66-11.326,15.733-0.797,18.365c12.768,3.195,42.307,7.718,62.266-20.229c6.078-8.509-0.036-22.086-8.385-25.547c-4.034-1.671-9.428-3.765-16.361,3.994z"/>
|
||||
<path d="M187.715,274.069c-1.368-8.917,2.93-19.528,7.536-31.942c6.922-18.626,22.893-37.255,10.117-96.339c-9.523-44.029-73.396-9.163-73.436-3.193c-0.039,5.968,2.889,30.26-1.067,58.548c-5.162,36.913,23.488,68.132,56.479,64.938"/>
|
||||
<path style="fill:#FFFFFF;stroke-width:4.155;stroke-linecap:butt;stroke-linejoin:miter;" d="M172.517,141.7c-0.288,2.039,3.733,7.48,8.976,8.207c5.234,0.73,9.714-3.522,9.998-5.559c0.284-2.039-3.732-4.285-8.977-5.015c-5.237-0.731-9.719,0.333-9.996,2.367z"/>
|
||||
<path style="fill:#FFFFFF;stroke-width:2.0775;stroke-linecap:butt;stroke-linejoin:miter;" d="M331.941,137.543c0.284,2.039-3.732,7.48-8.976,8.207c-5.238,0.73-9.718-3.522-10.005-5.559c-0.277-2.039,3.74-4.285,8.979-5.015c5.239-0.73,9.718,0.333,10.002,2.368z"/>
|
||||
<path d="M350.676,123.432c0.863,15.994-3.445,26.888-3.988,43.914c-0.804,24.748,11.799,53.074-7.191,81.435"/>
|
||||
<path style="stroke-width:3;" d="M0,60.232"/>
|
||||
</g>
|
||||
</svg>
|
After Width: | Height: | Size: 4.4 KiB |
public/app/plugins/datasource/postgres/mode-sql.js (new file, +103)
@@ -0,0 +1,103 @@
// jshint ignore: start
// jscs: disable

ace.define("ace/mode/sql_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/text_highlight_rules"], function(require, exports, module) {
"use strict";

var oop = require("../lib/oop");
var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules;

var SqlHighlightRules = function() {

  var keywords = (
    "select|insert|update|delete|from|where|and|or|group|by|order|limit|offset|having|as|case|" +
    "when|else|end|type|left|right|join|on|outer|desc|asc|union|create|table|primary|key|if|" +
    "foreign|not|references|default|null|inner|cross|natural|database|drop|grant"
  );

  var builtinConstants = (
    "true|false"
  );

  var builtinFunctions = (
    "avg|count|first|last|max|min|sum|upper|lower|substring|char_length|round|rank|now|" +
    "coalesce"
  );

  var dataTypes = (
    "int|int2|int4|int8|numeric|decimal|date|varchar|char|bigint|float|bool|bytea|text|timestamp|" +
    "time|money|real|integer"
  );

  var keywordMapper = this.createKeywordMapper({
    "support.function": builtinFunctions,
    "keyword": keywords,
    "constant.language": builtinConstants,
    "storage.type": dataTypes
  }, "identifier", true);

  this.$rules = {
    "start" : [ {
      token : "comment",
      regex : "--.*$"
    }, {
      token : "comment",
      start : "/\\*",
      end : "\\*/"
    }, {
      token : "string", // " string
      regex : '".*?"'
    }, {
      token : "string", // ' string
      regex : "'.*?'"
    }, {
      token : "constant.numeric", // float
      regex : "[+-]?\\d+(?:(?:\\.\\d*)?(?:[eE][+-]?\\d+)?)?\\b"
    }, {
      token : keywordMapper,
      regex : "[a-zA-Z_$][a-zA-Z0-9_$]*\\b"
    }, {
      token : "keyword.operator",
      regex : "\\+|\\-|\\/|\\/\\/|%|<@>|@>|<@|&|\\^|~|<|>|<=|=>|==|!=|<>|="
    }, {
      token : "paren.lparen",
      regex : "[\\(]"
    }, {
      token : "paren.rparen",
      regex : "[\\)]"
    }, {
      token : "text",
      regex : "\\s+"
    } ]
  };
  this.normalizeRules();
};

oop.inherits(SqlHighlightRules, TextHighlightRules);

exports.SqlHighlightRules = SqlHighlightRules;
});

ace.define("ace/mode/sql",["require","exports","module","ace/lib/oop","ace/mode/text","ace/mode/sql_highlight_rules"], function(require, exports, module) {
"use strict";

var oop = require("../lib/oop");
var TextMode = require("./text").Mode;
var SqlHighlightRules = require("./sql_highlight_rules").SqlHighlightRules;

var Mode = function() {
  this.HighlightRules = SqlHighlightRules;
  this.$behaviour = this.$defaultBehaviour;
};
oop.inherits(Mode, TextMode);

(function() {

  this.lineCommentStart = "--";

  this.$id = "ace/mode/sql";
}).call(Mode.prototype);

exports.Mode = Mode;

});
public/app/plugins/datasource/postgres/module.ts (new file, +45)
@@ -0,0 +1,45 @@
///<reference path="../../../headers/common.d.ts" />

import {PostgresDatasource} from './datasource';
import {PostgresQueryCtrl} from './query_ctrl';

class PostgresConfigCtrl {
  static templateUrl = 'partials/config.html';

  current: any;

  /** @ngInject **/
  constructor($scope) {
    this.current.jsonData.sslmode = this.current.jsonData.sslmode || 'require';
  }
}

const defaultQuery = `SELECT
  extract(epoch from time_column) AS time,
  title_column as title,
  description_column as text
FROM
  metric_table
WHERE
  $__timeFilter(time_column)
`;

class PostgresAnnotationsQueryCtrl {
  static templateUrl = 'partials/annotations.editor.html';

  annotation: any;

  /** @ngInject **/
  constructor() {
    this.annotation.rawQuery = this.annotation.rawQuery || defaultQuery;
  }
}

export {
  PostgresDatasource,
  PostgresDatasource as Datasource,
  PostgresQueryCtrl as QueryCtrl,
  PostgresConfigCtrl as ConfigCtrl,
  PostgresAnnotationsQueryCtrl as AnnotationsQueryCtrl,
};
public/app/plugins/datasource/postgres/partials/annotations.editor.html (new file, +41)
@@ -0,0 +1,41 @@

<div class="gf-form-group">
  <div class="gf-form-inline">
    <div class="gf-form gf-form--grow">
      <textarea rows="10" class="gf-form-input" ng-model="ctrl.annotation.rawQuery" spellcheck="false" placeholder="query expression" data-min-length=0 data-items=100 ng-model-onblur ng-change="ctrl.panelCtrl.refresh()"></textarea>
    </div>
  </div>

  <div class="gf-form-inline">
    <div class="gf-form">
      <label class="gf-form-label query-keyword" ng-click="ctrl.showHelp = !ctrl.showHelp">
        Show Help
        <i class="fa fa-caret-down" ng-show="ctrl.showHelp"></i>
        <i class="fa fa-caret-right" ng-hide="ctrl.showHelp"></i>
      </label>
    </div>
  </div>

  <div class="gf-form" ng-show="ctrl.showHelp">
    <pre class="gf-form-pre alert alert-info"><h6>Annotation Query Format</h6>
An annotation is an event that is overlaid on top of graphs. The query can have up to four columns per row; the time column is mandatory. Annotation rendering is expensive, so it is important to limit the number of rows returned.

- column with alias: <b>time</b> for the annotation event. Format is UTC in seconds, use extract(epoch from column) as "time"
- column with alias: <b>title</b> for the annotation title
- column with alias: <b>text</b> for the annotation text
- column with alias: <b>tags</b> for annotation tags. This is a comma separated string of tags e.g. 'tag1,tag2'


Macros:
- $__time(column) -> column as "time"
- $__timeFilter(column) -> column ≥ to_timestamp(1492750877) AND column ≤ to_timestamp(1492750877)
- $__unixEpochFilter(column) -> column > 1492750877 AND column < 1492750877

Or build your own conditionals using these macros which just return the values:
- $__timeFrom() -> to_timestamp(1492750877)
- $__timeTo() -> to_timestamp(1492750877)
- $__unixEpochFrom() -> 1492750877
- $__unixEpochTo() -> 1492750877
</pre>
  </div>
</div>
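Taken together, the help text above implies annotation queries of roughly the following shape; the table and column names here are hypothetical:

```sql
SELECT
  extract(epoch from happened_at) AS time,   -- mandatory, UTC in seconds
  event_name AS title,
  details AS text,
  'deploy,backend' AS tags                   -- comma-separated tag string
FROM deploy_events
WHERE $__timeFilter(happened_at);
```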
public/app/plugins/datasource/postgres/partials/config.html (new file, +52)
@@ -0,0 +1,52 @@

<h3 class="page-heading">PostgreSQL Connection</h3>

<div class="gf-form-group">
  <div class="gf-form max-width-30">
    <span class="gf-form-label width-7">Host</span>
    <input type="text" class="gf-form-input" ng-model='ctrl.current.url' placeholder="localhost:5432" bs-typeahead="{{['localhost:5432', 'localhost:5433']}}" required></input>
  </div>

  <div class="gf-form max-width-30">
    <span class="gf-form-label width-7">Database</span>
    <input type="text" class="gf-form-input" ng-model='ctrl.current.database' placeholder="database name" required></input>
  </div>

  <div class="gf-form-inline">
    <div class="gf-form max-width-15">
      <span class="gf-form-label width-7">User</span>
      <input type="text" class="gf-form-input" ng-model='ctrl.current.user' placeholder="user"></input>
    </div>
    <div class="gf-form max-width-15" ng-if="!ctrl.current.secureJsonFields.password">
      <span class="gf-form-label width-7">Password</span>
      <input type="password" class="gf-form-input" ng-model='ctrl.current.secureJsonData.password' placeholder="password"></input>
    </div>
    <div class="gf-form max-width-19" ng-if="ctrl.current.secureJsonFields.password">
      <span class="gf-form-label width-7">Password</span>
      <input type="text" class="gf-form-input" disabled="disabled" value="configured">
      <a class="btn btn-secondary gf-form-btn" href="#" ng-click="ctrl.current.secureJsonFields.password = false">reset</a>
    </div>
  </div>
  <div class="gf-form">
    <label class="gf-form-label width-7">SSL Mode</label>
    <div class="gf-form-select-wrapper max-width-15 gf-form-select-wrapper--has-help-icon">
      <select class="gf-form-input" ng-model="ctrl.current.jsonData.sslmode" ng-options="mode for mode in ['disable', 'require', 'verify-ca', 'verify-full']" ng-init="ctrl.current.jsonData.sslmode"></select>
      <info-popover mode="right-absolute">
        This option determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server.
      </info-popover>
    </div>
  </div>
</div>

<div class="gf-form-group">
  <div class="grafana-info-box">
    <h5>User Permission</h5>
    <p>
      The database user should only be granted SELECT permissions on the specified database & tables you want to query.
      Grafana does not validate that queries are safe, so queries can contain any SQL statement. For example, statements
      like <code>DELETE FROM user;</code> and <code>DROP TABLE user;</code> would be executed. To protect against this we
      <strong>highly</strong> recommend you create a specific PostgreSQL user with restricted permissions.
    </p>
  </div>
</div>
public/app/plugins/datasource/postgres/partials/query.editor.html (new file, +79)
@@ -0,0 +1,79 @@
<query-editor-row query-ctrl="ctrl" can-collapse="false">
  <div class="gf-form-inline">
    <div class="gf-form gf-form--grow">
      <code-editor content="ctrl.target.rawSql" datasource="ctrl.datasource" on-change="ctrl.panelCtrl.refresh()" data-mode="sql">
      </code-editor>
    </div>
  </div>

  <div class="gf-form-inline">
    <div class="gf-form">
      <label class="gf-form-label query-keyword">Format as</label>
      <div class="gf-form-select-wrapper">
        <select class="gf-form-input gf-size-auto" ng-model="ctrl.target.format" ng-options="f.value as f.text for f in ctrl.formats" ng-change="ctrl.refresh()"></select>
      </div>
    </div>
    <div class="gf-form">
      <label class="gf-form-label query-keyword" ng-click="ctrl.showHelp = !ctrl.showHelp">
        Show Help
        <i class="fa fa-caret-down" ng-show="ctrl.showHelp"></i>
        <i class="fa fa-caret-right" ng-hide="ctrl.showHelp"></i>
      </label>
    </div>
    <div class="gf-form" ng-show="ctrl.lastQueryMeta">
      <label class="gf-form-label query-keyword" ng-click="ctrl.showLastQuerySQL = !ctrl.showLastQuerySQL">
        Generated SQL
        <i class="fa fa-caret-down" ng-show="ctrl.showLastQuerySQL"></i>
        <i class="fa fa-caret-right" ng-hide="ctrl.showLastQuerySQL"></i>
      </label>
    </div>
    <div class="gf-form gf-form--grow">
      <div class="gf-form-label gf-form-label--grow"></div>
    </div>
  </div>

  <div class="gf-form" ng-show="ctrl.showLastQuerySQL">
    <pre class="gf-form-pre">{{ctrl.lastQueryMeta.sql}}</pre>
  </div>

  <div class="gf-form" ng-show="ctrl.showHelp">
    <pre class="gf-form-pre alert alert-info">Time series:
- return column named <i>time</i> (UTC in seconds or timestamp)
- return column(s) with numeric datatype as values
- (Optional: return column named <i>metric</i> to represent the series name. If no column named metric is found, the column name of the value column is used as the series name)

Table:
- return any set of columns

Macros:
- $__time(column) -> column as "time"
- $__timeEpoch -> extract(epoch from column) as "time"
- $__timeFilter(column) -> column ≥ to_timestamp(1492750877) AND column ≤ to_timestamp(1492750877)
- $__unixEpochFilter(column) -> column > 1492750877 AND column < 1492750877

To group by time use $__timeGroup:
-> (extract(epoch from column)/extract(epoch from column::interval))::int

Example of group by and order by with $__timeGroup:
SELECT
  min(date_time_col) AS time_sec,
  sum(value_double) as value
FROM yourtable
group by $__timeGroup(date_time_col, '1h')
order by $__timeGroup(date_time_col, '1h') ASC

Or build your own conditionals using these macros which just return the values:
- $__timeFrom() -> to_timestamp(1492750877)
- $__timeTo() -> to_timestamp(1492750877)
- $__unixEpochFrom() -> 1492750877
- $__unixEpochTo() -> 1492750877
</pre>
  </div>

</div>

<div class="gf-form" ng-show="ctrl.lastQueryError">
  <pre class="gf-form-pre alert alert-error">{{ctrl.lastQueryError}}</pre>
</div>

</query-editor-row>
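As a concrete sketch of the time-series contract in the help text above (a time column, an optional metric column, and a numeric value column), with hypothetical table and column names:

```sql
SELECT
  min(created_at) AS time,
  hostname AS metric,          -- optional: names each series
  avg(load_avg) AS value
FROM server_metrics
WHERE $__timeFilter(created_at)
GROUP BY $__timeGroup(created_at, '1h'), hostname
ORDER BY $__timeGroup(created_at, '1h') ASC;
```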
public/app/plugins/datasource/postgres/plugin.json (new file, +20)
@@ -0,0 +1,20 @@
{
  "type": "datasource",
  "name": "PostgreSQL",
  "id": "postgres",

  "info": {
    "author": {
      "name": "Grafana Project",
      "url": "https://grafana.com"
    },
    "logos": {
      "small": "img/postgresql_logo.svg",
      "large": "img/postgresql_logo.svg"
    }
  },

  "alerting": true,
  "annotations": true,
  "metrics": true
}
public/app/plugins/datasource/postgres/query_ctrl.ts (new file, +84)
@@ -0,0 +1,84 @@
///<reference path="../../../headers/common.d.ts" />

import _ from 'lodash';
import {QueryCtrl} from 'app/plugins/sdk';

export interface PostgresQuery {
  refId: string;
  format: string;
  alias: string;
  rawSql: string;
}

export interface QueryMeta {
  sql: string;
}

const defaultQuery = `SELECT
  $__time(time_column),
  value1
FROM
  metric_table
WHERE
  $__timeFilter(time_column)
`;

export class PostgresQueryCtrl extends QueryCtrl {
  static templateUrl = 'partials/query.editor.html';

  showLastQuerySQL: boolean;
  formats: any[];
  target: PostgresQuery;
  lastQueryMeta: QueryMeta;
  lastQueryError: string;
  showHelp: boolean;

  /** @ngInject **/
  constructor($scope, $injector) {
    super($scope, $injector);

    this.target.format = this.target.format || 'time_series';
    this.target.alias = "";
    this.formats = [
      {text: 'Time series', value: 'time_series'},
      {text: 'Table', value: 'table'},
    ];

    if (!this.target.rawSql) {
      // special handling when in table panel
      if (this.panelCtrl.panel.type === 'table') {
        this.target.format = 'table';
        this.target.rawSql = "SELECT 1";
      } else {
        this.target.rawSql = defaultQuery;
      }
    }

    this.panelCtrl.events.on('data-received', this.onDataReceived.bind(this), $scope);
    this.panelCtrl.events.on('data-error', this.onDataError.bind(this), $scope);
  }

  onDataReceived(dataList) {
    this.lastQueryMeta = null;
    this.lastQueryError = null;

    let anySeriesFromQuery = _.find(dataList, {refId: this.target.refId});
    if (anySeriesFromQuery) {
      this.lastQueryMeta = anySeriesFromQuery.meta;
    }
  }

  onDataError(err) {
    if (err.data && err.data.results) {
      let queryRes = err.data.results[this.target.refId];
      if (queryRes) {
        this.lastQueryMeta = queryRes.meta;
        this.lastQueryError = queryRes.error;
      }
    }
  }
}
public/app/plugins/datasource/postgres/response_parser.ts (new file, +141)
@@ -0,0 +1,141 @@
///<reference path="../../../headers/common.d.ts" />

import _ from 'lodash';

export default class ResponseParser {
  constructor(private $q) {}

  processQueryResult(res) {
    var data = [];

    if (!res.data.results) {
      return {data: data};
    }

    for (let key in res.data.results) {
      let queryRes = res.data.results[key];

      if (queryRes.series) {
        for (let series of queryRes.series) {
          data.push({
            target: series.name,
            datapoints: series.points,
            refId: queryRes.refId,
            meta: queryRes.meta,
          });
        }
      }

      if (queryRes.tables) {
        for (let table of queryRes.tables) {
          table.type = 'table';
          table.refId = queryRes.refId;
          table.meta = queryRes.meta;
          data.push(table);
        }
      }
    }

    return {data: data};
  }

  parseMetricFindQueryResult(refId, results) {
    if (!results || results.data.length === 0 || results.data.results[refId].meta.rowCount === 0) { return []; }

    const columns = results.data.results[refId].tables[0].columns;
    const rows = results.data.results[refId].tables[0].rows;
    const textColIndex = this.findColIndex(columns, '__text');
    const valueColIndex = this.findColIndex(columns, '__value');

    if (columns.length === 2 && textColIndex !== -1 && valueColIndex !== -1) {
      return this.transformToKeyValueList(rows, textColIndex, valueColIndex);
    }

    return this.transformToSimpleList(rows);
  }

  transformToKeyValueList(rows, textColIndex, valueColIndex) {
    const res = [];

    for (let i = 0; i < rows.length; i++) {
      if (!this.containsKey(res, rows[i][textColIndex])) {
        res.push({text: rows[i][textColIndex], value: rows[i][valueColIndex]});
      }
    }

    return res;
  }

  transformToSimpleList(rows) {
    const res = [];

    for (let i = 0; i < rows.length; i++) {
      for (let j = 0; j < rows[i].length; j++) {
        const value = rows[i][j];
        if (res.indexOf(value) === -1) {
          res.push(value);
        }
      }
    }

    return _.map(res, value => {
      return { text: value};
    });
  }

  findColIndex(columns, colName) {
    for (let i = 0; i < columns.length; i++) {
      if (columns[i].text === colName) {
        return i;
      }
    }

    return -1;
  }

  containsKey(res, key) {
    for (let i = 0; i < res.length; i++) {
      if (res[i].text === key) {
        return true;
      }
    }
    return false;
  }

  transformAnnotationResponse(options, data) {
    const table = data.data.results[options.annotation.name].tables[0];

    let timeColumnIndex = -1;
    let titleColumnIndex = -1;
    let textColumnIndex = -1;
    let tagsColumnIndex = -1;

    for (let i = 0; i < table.columns.length; i++) {
      if (table.columns[i].text === 'time') {
        timeColumnIndex = i;
      } else if (table.columns[i].text === 'text') {
        textColumnIndex = i;
      } else if (table.columns[i].text === 'tags') {
        tagsColumnIndex = i;
      }
    }

    if (timeColumnIndex === -1) {
      return this.$q.reject({message: 'Missing mandatory time column in annotation query.'});
    }

    const list = [];
    for (let i = 0; i < table.rows.length; i++) {
      const row = table.rows[i];
      list.push({
        annotation: options.annotation,
        time: Math.floor(row[timeColumnIndex]) * 1000,
        title: row[titleColumnIndex],
        text: row[textColumnIndex],
        tags: row[tagsColumnIndex] ? row[tagsColumnIndex].trim().split(/\s*,\s*/) : []
      });
    }

    return list;
  }
}
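Given how `parseMetricFindQueryResult` treats the `__text` and `__value` aliases above, a template-variable query that maps display names to ids would be written along these lines (hypothetical table and columns):

```sql
SELECT hostname AS __text, host_id AS __value FROM host;
```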
public/app/plugins/datasource/postgres/specs/datasource_specs.ts (new file, +196)
@@ -0,0 +1,196 @@
import {describe, beforeEach, it, expect, angularMocks} from 'test/lib/common';
import moment from 'moment';
import helpers from 'test/specs/helpers';
import {PostgresDatasource} from '../datasource';

describe('PostgreSQLDatasource', function() {
  var ctx = new helpers.ServiceTestContext();
  var instanceSettings = {name: 'postgresql'};

  beforeEach(angularMocks.module('grafana.core'));
  beforeEach(angularMocks.module('grafana.services'));
  beforeEach(ctx.providePhase(['backendSrv']));

  beforeEach(angularMocks.inject(function($q, $rootScope, $httpBackend, $injector) {
    ctx.$q = $q;
    ctx.$httpBackend = $httpBackend;
    ctx.$rootScope = $rootScope;
    ctx.ds = $injector.instantiate(PostgresDatasource, {instanceSettings: instanceSettings});
    $httpBackend.when('GET', /\.html$/).respond('');
  }));

  describe('When performing annotationQuery', function() {
    let results;

    const annotationName = 'MyAnno';

    const options = {
      annotation: {
        name: annotationName,
        rawQuery: 'select time, title, text, tags from table;'
      },
      range: {
        from: moment(1432288354),
        to: moment(1432288401)
      }
    };

    const response = {
      results: {
        MyAnno: {
          refId: annotationName,
          tables: [
            {
              columns: [{text: 'time'}, {text: 'text'}, {text: 'tags'}],
              rows: [
                [1432288355, 'some text', 'TagA,TagB'],
                [1432288390, 'some text2', ' TagB , TagC'],
                [1432288400, 'some text3']
              ]
            }
          ]
        }
      }
    };

    beforeEach(function() {
      ctx.backendSrv.datasourceRequest = function(options) {
        return ctx.$q.when({data: response, status: 200});
      };
      ctx.ds.annotationQuery(options).then(function(data) { results = data; });
      ctx.$rootScope.$apply();
    });

    it('should return annotation list', function() {
      expect(results.length).to.be(3);

      expect(results[0].text).to.be('some text');
      expect(results[0].tags[0]).to.be('TagA');
      expect(results[0].tags[1]).to.be('TagB');

      expect(results[1].tags[0]).to.be('TagB');
      expect(results[1].tags[1]).to.be('TagC');

      expect(results[2].tags.length).to.be(0);
    });
  });

  describe('When performing metricFindQuery', function() {
    let results;
    const query = 'select * from atable';
    const response = {
      results: {
        tempvar: {
          meta: {
            rowCount: 3
          },
          refId: 'tempvar',
          tables: [
            {
              columns: [{text: 'title'}, {text: 'text'}],
              rows: [
                ['aTitle', 'some text'],
                ['aTitle2', 'some text2'],
                ['aTitle3', 'some text3']
              ]
            }
          ]
        }
      }
    };

    beforeEach(function() {
      ctx.backendSrv.datasourceRequest = function(options) {
        return ctx.$q.when({data: response, status: 200});
      };
      ctx.ds.metricFindQuery(query).then(function(data) { results = data; });
      ctx.$rootScope.$apply();
    });

    it('should return list of all column values', function() {
      expect(results.length).to.be(6);
      expect(results[0].text).to.be('aTitle');
      expect(results[5].text).to.be('some text3');
    });
  });

  describe('When performing metricFindQuery with key, value columns', function() {
    let results;
    const query = 'select * from atable';
    const response = {
      results: {
        tempvar: {
          meta: {
            rowCount: 3
          },
          refId: 'tempvar',
          tables: [
            {
              columns: [{text: '__value'}, {text: '__text'}],
              rows: [
                ['value1', 'aTitle'],
                ['value2', 'aTitle2'],
                ['value3', 'aTitle3']
              ]
            }
          ]
        }
      }
    };

    beforeEach(function() {
      ctx.backendSrv.datasourceRequest = function(options) {
        return ctx.$q.when({data: response, status: 200});
      };
      ctx.ds.metricFindQuery(query).then(function(data) { results = data; });
      ctx.$rootScope.$apply();
    });

    it('should return list as text, value', function() {
      expect(results.length).to.be(3);
      expect(results[0].text).to.be('aTitle');
      expect(results[0].value).to.be('value1');
      expect(results[2].text).to.be('aTitle3');
      expect(results[2].value).to.be('value3');
    });
  });

  describe('When performing metricFindQuery with key, value columns and with duplicate keys', function() {
    let results;
    const query = 'select * from atable';
    const response = {
      results: {
        tempvar: {
          meta: {
            rowCount: 3
          },
          refId: 'tempvar',
          tables: [
            {
              columns: [{text: '__text'}, {text: '__value'}],
              rows: [
                ['aTitle', 'same'],
                ['aTitle', 'same'],
                ['aTitle', 'diff']
              ]
            }
          ]
        }
      }
    };

    beforeEach(function() {
      ctx.backendSrv.datasourceRequest = function(options) {
        return ctx.$q.when({data: response, status: 200});
      };
      ctx.ds.metricFindQuery(query).then(function(data) { results = data; });
      ctx.$rootScope.$apply();
    });

    it('should return list of unique keys', function() {
      expect(results.length).to.be(1);
      expect(results[0].text).to.be('aTitle');
      expect(results[0].value).to.be('same');
    });
  });
});
@@ -606,7 +606,7 @@ class SingleStatCtrl extends MetricsPanelCtrl {

    var body = panel.gauge.show ? '' : getBigValueHtml();

    if (panel.colorBackground && !isNaN(data.value)) {
    if (panel.colorBackground) {
      var color = getColorForValue(data, data.value);
      if (color) {
        $panelContainer.css('background-color', color);
@@ -690,6 +690,9 @@ class SingleStatCtrl extends MetricsPanelCtrl {
  }

  function getColorForValue(data, value) {
    if (!_.isFinite(value)) {
      return null;
    }
    for (var i = data.thresholds.length; i > 0; i--) {
      if (value >= data.thresholds[i-1]) {
        return data.colorMap[i];
@@ -1,278 +0,0 @@
define([
  'app/core/time_series'
], function(TimeSeries) {
  'use strict';

  describe("TimeSeries", function() {
    var points, series;
    var yAxisFormats = ['short', 'ms'];
    var testData;

    beforeEach(function() {
      testData = {
        alias: 'test',
        datapoints: [
          [1,2],[null,3],[10,4],[8,5]
        ]
      };
    });

    describe('when getting flot pairs', function() {
      it('with connected style, should ignore nulls', function() {
        series = new TimeSeries(testData);
        points = series.getFlotPairs('connected', yAxisFormats);
        expect(points.length).to.be(3);
      });

      it('with null as zero style, should replace nulls with zero', function() {
        series = new TimeSeries(testData);
        points = series.getFlotPairs('null as zero', yAxisFormats);
        expect(points.length).to.be(4);
        expect(points[1][1]).to.be(0);
      });

      it('if last is null current should pick next to last', function() {
        series = new TimeSeries({
          datapoints: [[10,1], [null, 2]]
        });
        series.getFlotPairs('null', yAxisFormats);
        expect(series.stats.current).to.be(10);
      });

      it('max value should work for negative values', function() {
        series = new TimeSeries({
          datapoints: [[-10,1], [-4, 2]]
        });
        series.getFlotPairs('null', yAxisFormats);
        expect(series.stats.max).to.be(-4);
      });

      it('average value should ignore nulls', function() {
        series = new TimeSeries(testData);
        series.getFlotPairs('null', yAxisFormats);
        expect(series.stats.avg).to.be(6.333333333333333);
      });

      it('the delta value should account for nulls', function() {
        series = new TimeSeries({
          datapoints: [[1,2],[3,3],[null,4],[10,5],[15,6]]
        });
        series.getFlotPairs('null', yAxisFormats);
        expect(series.stats.delta).to.be(14);
      });

      it('the delta value should account for nulls on first', function() {
        series = new TimeSeries({
          datapoints: [[null,2],[1,3],[10,4],[15,5]]
        });
        series.getFlotPairs('null', yAxisFormats);
        expect(series.stats.delta).to.be(14);
      });

      it('the delta value should account for nulls on last', function() {
        series = new TimeSeries({
          datapoints: [[1,2],[5,3],[10,4],[null,5]]
        });
        series.getFlotPairs('null', yAxisFormats);
        expect(series.stats.delta).to.be(9);
      });

      it('the delta value should account for resets', function() {
        series = new TimeSeries({
          datapoints: [[1,2],[5,3],[10,4],[0,5],[10,6]]
        });
        series.getFlotPairs('null', yAxisFormats);
        expect(series.stats.delta).to.be(19);
      });

      it('the delta value should account for resets on last', function() {
        series = new TimeSeries({
          datapoints: [[1,2],[2,3],[10,4],[8,5]]
        });
        series.getFlotPairs('null', yAxisFormats);
        expect(series.stats.delta).to.be(17);
      });

      it('the range value should be max - min', function() {
        series = new TimeSeries(testData);
        series.getFlotPairs('null', yAxisFormats);
        expect(series.stats.range).to.be(9);
      });

      it('first value should ignore nulls', function() {
        series = new TimeSeries(testData);
        series.getFlotPairs('null', yAxisFormats);
        expect(series.stats.first).to.be(1);
        series = new TimeSeries({
          datapoints: [[null,2],[1,3],[10,4],[8,5]]
        });
        series.getFlotPairs('null', yAxisFormats);
        expect(series.stats.first).to.be(1);
      });

      it('with null as zero style, average value should treat nulls as 0', function() {
        series = new TimeSeries(testData);
        series.getFlotPairs('null as zero', yAxisFormats);
        expect(series.stats.avg).to.be(4.75);
      });
    });

    describe('When checking if ms resolution is needed', function() {
      describe('msResolution with second resolution timestamps', function() {
        beforeEach(function() {
          series = new TimeSeries({datapoints: [[45, 1234567890], [60, 1234567899]]});
        });

        it('should set hasMsResolution to false', function() {
          expect(series.hasMsResolution).to.be(false);
        });
      });

      describe('msResolution with millisecond resolution timestamps', function() {
        beforeEach(function() {
          series = new TimeSeries({datapoints: [[55, 1236547890001], [90, 1234456709000]]});
        });

        it('should show millisecond resolution tooltip', function() {
          expect(series.hasMsResolution).to.be(true);
        });
      });

      describe('msResolution with millisecond resolution timestamps but with trailing zeroes', function() {
        beforeEach(function() {
          series = new TimeSeries({datapoints: [[45, 1234567890000], [60, 1234567899000]]});
        });

        it('should not show millisecond resolution tooltip', function() {
          expect(series.hasMsResolution).to.be(false);
        });
      });
    });

    describe('can detect if series contains ms precision', function() {
      var fakedata;

      beforeEach(function() {
        fakedata = testData;
      });

      it('missing datapoint with ms precision', function() {
        fakedata.datapoints[0] = [1337, 1234567890000];
        series = new TimeSeries(fakedata);
        expect(series.isMsResolutionNeeded()).to.be(false);
      });

      it('contains datapoint with ms precision', function() {
        fakedata.datapoints[0] = [1337, 1236547890001];
        series = new TimeSeries(fakedata);
        expect(series.isMsResolutionNeeded()).to.be(true);
      });
    });

    describe('series overrides', function() {
      var series;
      beforeEach(function() {
        series = new TimeSeries(testData);
      });

      describe('fill & points', function() {
        beforeEach(function() {
          series.alias = 'test';
          series.applySeriesOverrides([{ alias: 'test', fill: 0, points: true }]);
        });

        it('should set fill zero, and enable points', function() {
          expect(series.lines.fill).to.be(0.001);
          expect(series.points.show).to.be(true);
        });
      });

      describe('series option overrides, bars, true & lines false', function() {
        beforeEach(function() {
          series.alias = 'test';
          series.applySeriesOverrides([{ alias: 'test', bars: true, lines: false }]);
        });

        it('should disable lines, and enable bars', function() {
          expect(series.lines.show).to.be(false);
          expect(series.bars.show).to.be(true);
        });
      });

      describe('series option overrides, linewidth, stack', function() {
        beforeEach(function() {
          series.alias = 'test';
          series.applySeriesOverrides([{ alias: 'test', linewidth: 5, stack: false }]);
        });

        it('should disable stack, and set lineWidth', function() {
          expect(series.stack).to.be(false);
          expect(series.lines.lineWidth).to.be(5);
        });
      });

      describe('series option overrides, dashes and lineWidth', function() {
        beforeEach(function() {
          series.alias = 'test';
          series.applySeriesOverrides([{ alias: 'test', linewidth: 5, dashes: true }]);
        });

        it('should enable dashes, set dashes lineWidth to 5 and lines lineWidth to 0', function() {
          expect(series.dashes.show).to.be(true);
          expect(series.dashes.lineWidth).to.be(5);
          expect(series.lines.lineWidth).to.be(0);
        });
      });

      describe('series option overrides, fill below to', function() {
        beforeEach(function() {
          series.alias = 'test';
          series.applySeriesOverrides([{ alias: 'test', fillBelowTo: 'min' }]);
        });

        it('should disable line fill and add fillBelowTo', function() {
          expect(series.fillBelowTo).to.be('min');
        });
      });

      describe('series option overrides, pointradius, steppedLine', function() {
        beforeEach(function() {
          series.alias = 'test';
          series.applySeriesOverrides([{ alias: 'test', pointradius: 5, steppedLine: true }]);
        });

        it('should set pointradius, and set steppedLine', function() {
          expect(series.points.radius).to.be(5);
          expect(series.lines.steps).to.be(true);
        });
      });

      describe('override match on regex', function() {
        beforeEach(function() {
          series.alias = 'test_01';
          series.applySeriesOverrides([{ alias: '/.*01/', lines: false }]);
        });

        it('should match second series', function() {
          expect(series.lines.show).to.be(false);
        });
      });

      describe('override series y-axis, and z-index', function() {
        beforeEach(function() {
          series.alias = 'test';
          series.applySeriesOverrides([{ alias: 'test', yaxis: 2, zindex: 2 }]);
        });

        it('should set yaxis', function() {
          expect(series.yaxis).to.be(2);
        });

        it('should set zindex', function() {
          expect(series.zindex).to.be(2);
        });
      });

    });
  });
});
6
vendor/github.com/lib/pq/array.go
generated
vendored
@ -13,7 +13,7 @@ import (

var typeByteSlice = reflect.TypeOf([]byte{})
var typeDriverValuer = reflect.TypeOf((*driver.Valuer)(nil)).Elem()
var typeSqlScanner = reflect.TypeOf((*sql.Scanner)(nil)).Elem()
var typeSQLScanner = reflect.TypeOf((*sql.Scanner)(nil)).Elem()

// Array returns the optimal driver.Valuer and sql.Scanner for an array or
// slice of any dimension.

@ -278,7 +278,7 @@ func (GenericArray) evaluateDestination(rt reflect.Type) (reflect.Type, func([]b
	// TODO calculate the assign function for other types
	// TODO repeat this section on the element type of arrays or slices (multidimensional)
	{
		if reflect.PtrTo(rt).Implements(typeSqlScanner) {
		if reflect.PtrTo(rt).Implements(typeSQLScanner) {
			// dest is always addressable because it is an element of a slice.
			assign = func(src []byte, dest reflect.Value) (err error) {
				ss := dest.Addr().Interface().(sql.Scanner)

@ -587,7 +587,7 @@ func appendArrayElement(b []byte, rv reflect.Value) ([]byte, string, error) {
		}
	}

	var del string = ","
	var del = ","
	var err error
	var iv interface{} = rv.Interface()
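The typeSqlScanner to typeSQLScanner change is a pure rename to Go's initialism style; GenericArray still uses it to decide whether an element type can scan itself. For orientation, a minimal sketch of reading a PostgreSQL array through pq.Array — the DSN is a hypothetical placeholder, not part of this commit:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	"github.com/lib/pq" // registers the "postgres" driver and provides pq.Array
)

func main() {
	// Hypothetical DSN; adjust for your environment.
	db, err := sql.Open("postgres", "postgres://grafanareader:password@localhost/db?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// pq.Array wraps a slice pointer in a sql.Scanner; internally the
	// GenericArray code above uses typeSQLScanner to route element scanning.
	var tags []string
	if err := db.QueryRow(`SELECT ARRAY['a','b','c']`).Scan(pq.Array(&tags)); err != nil {
		log.Fatal(err)
	}
	fmt.Println(tags) // [a b c]
}
```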
117
vendor/github.com/lib/pq/conn.go
generated
vendored
@ -27,12 +27,12 @@ var (
	ErrNotSupported              = errors.New("pq: Unsupported command")
	ErrInFailedTransaction       = errors.New("pq: Could not complete operation in a failed transaction")
	ErrSSLNotSupported           = errors.New("pq: SSL is not enabled on the server")
	ErrSSLKeyHasWorldPermissions = errors.New("pq: Private key file has group or world access. Permissions should be u=rw (0600) or less.")
	ErrCouldNotDetectUsername    = errors.New("pq: Could not detect default username. Please provide one explicitly.")
	ErrSSLKeyHasWorldPermissions = errors.New("pq: Private key file has group or world access. Permissions should be u=rw (0600) or less")
	ErrCouldNotDetectUsername    = errors.New("pq: Could not detect default username. Please provide one explicitly")

	errUnexpectedReady = errors.New("unexpected ReadyForQuery")
	errNoRowsAffected  = errors.New("no RowsAffected available after the empty statement")
	errNoLastInsertId  = errors.New("no LastInsertId available after the empty statement")
	errNoLastInsertID  = errors.New("no LastInsertId available after the empty statement")
)

type Driver struct{}

@ -131,7 +131,7 @@ type conn struct {
}

// Handle driver-side settings in parsed connection string.
func (c *conn) handleDriverSettings(o values) (err error) {
func (cn *conn) handleDriverSettings(o values) (err error) {
	boolSetting := func(key string, val *bool) error {
		if value, ok := o[key]; ok {
			if value == "yes" {

@ -145,18 +145,18 @@ func (c *conn) handleDriverSettings(o values) (err error) {
		return nil
	}

	err = boolSetting("disable_prepared_binary_result", &c.disablePreparedBinaryResult)
	err = boolSetting("disable_prepared_binary_result", &cn.disablePreparedBinaryResult)
	if err != nil {
		return err
	}
	err = boolSetting("binary_parameters", &c.binaryParameters)
	err = boolSetting("binary_parameters", &cn.binaryParameters)
	if err != nil {
		return err
	}
	return nil
}

func (c *conn) handlePgpass(o values) {
func (cn *conn) handlePgpass(o values) {
	// if a password was supplied, do not process .pgpass
	if _, ok := o["password"]; ok {
		return

@ -229,10 +229,10 @@ func (c *conn) handlePgpass(o values) {
	}
}

func (c *conn) writeBuf(b byte) *writeBuf {
	c.scratch[0] = b
func (cn *conn) writeBuf(b byte) *writeBuf {
	cn.scratch[0] = b
	return &writeBuf{
		buf: c.scratch[:5],
		buf: cn.scratch[:5],
		pos: 1,
	}
}

@ -310,9 +310,8 @@ func DialOpen(d Dialer, name string) (_ driver.Conn, err error) {
		u, err := userCurrent()
		if err != nil {
			return nil, err
		} else {
			o["user"] = u
		}
		o["user"] = u
	}

	cn := &conn{

@ -698,7 +697,7 @@ var emptyRows noRows
var _ driver.Result = noRows{}

func (noRows) LastInsertId() (int64, error) {
	return 0, errNoLastInsertId
	return 0, errNoLastInsertID
}

func (noRows) RowsAffected() (int64, error) {

@ -707,7 +706,7 @@ func (noRows) RowsAffected() (int64, error) {

// Decides which column formats to use for a prepared statement. The input is
// an array of type oids, one element per result column.
func decideColumnFormats(colTyps []oid.Oid, forceText bool) (colFmts []format, colFmtData []byte) {
func decideColumnFormats(colTyps []fieldDesc, forceText bool) (colFmts []format, colFmtData []byte) {
	if len(colTyps) == 0 {
		return nil, colFmtDataAllText
	}

@ -719,8 +718,8 @@ func decideColumnFormats(colTyps []oid.Oid, forceText bool) (colFmts []format, c

	allBinary := true
	allText := true
	for i, o := range colTyps {
		switch o {
	for i, t := range colTyps {
		switch t.OID {
		// This is the list of types to use binary mode for when receiving them
		// through a prepared statement. If a type appears in this list, it
		// must also be implemented in binaryDecode in encode.go.

@ -840,16 +839,15 @@ func (cn *conn) query(query string, args []driver.Value) (_ *rows, err error) {
		rows.colNames, rows.colFmts, rows.colTyps = cn.readPortalDescribeResponse()
		cn.postExecuteWorkaround()
		return rows, nil
	} else {
		st := cn.prepareTo(query, "")
		st.exec(args)
		return &rows{
			cn:       cn,
			colNames: st.colNames,
			colTyps:  st.colTyps,
			colFmts:  st.colFmts,
		}, nil
	}
	st := cn.prepareTo(query, "")
	st.exec(args)
	return &rows{
		cn:       cn,
		colNames: st.colNames,
		colTyps:  st.colTyps,
		colFmts:  st.colFmts,
	}, nil
}

// Implement the optional "Execer" interface for one-shot queries

@ -876,17 +874,16 @@ func (cn *conn) Exec(query string, args []driver.Value) (res driver.Result, err
		cn.postExecuteWorkaround()
		res, _, err = cn.readExecuteResponse("Execute")
		return res, err
	} else {
		// Use the unnamed statement to defer planning until bind
		// time, or else value-based selectivity estimates cannot be
		// used.
		st := cn.prepareTo(query, "")
		r, err := st.Exec(args)
		if err != nil {
			panic(err)
		}
		return r, err
	}
	// Use the unnamed statement to defer planning until bind
	// time, or else value-based selectivity estimates cannot be
	// used.
	st := cn.prepareTo(query, "")
	r, err := st.Exec(args)
	if err != nil {
		panic(err)
	}
	return r, err
}

func (cn *conn) send(m *writeBuf) {

@ -1147,10 +1144,10 @@ const formatText format = 0
const formatBinary format = 1

// One result-column format code with the value 1 (i.e. all binary).
var colFmtDataAllBinary []byte = []byte{0, 1, 0, 1}
var colFmtDataAllBinary = []byte{0, 1, 0, 1}

// No result-column format codes (i.e. all text).
var colFmtDataAllText []byte = []byte{0, 0}
var colFmtDataAllText = []byte{0, 0}

type stmt struct {
	cn *conn

@ -1158,7 +1155,7 @@ type stmt struct {
	colNames   []string
	colFmts    []format
	colFmtData []byte
	colTyps    []oid.Oid
	colTyps    []fieldDesc
	paramTyps  []oid.Oid
	closed     bool
}

@ -1321,7 +1318,7 @@ type rows struct {
	cn       *conn
	finish   func()
	colNames []string
	colTyps  []oid.Oid
	colTyps  []fieldDesc
	colFmts  []format
	done     bool
	rb       readBuf

@ -1409,7 +1406,7 @@ func (rs *rows) Next(dest []driver.Value) (err error) {
				dest[i] = nil
				continue
			}
			dest[i] = decode(&conn.parameterStatus, rs.rb.next(l), rs.colTyps[i], rs.colFmts[i])
			dest[i] = decode(&conn.parameterStatus, rs.rb.next(l), rs.colTyps[i].OID, rs.colFmts[i])
		}
		return
	case 'T':

@ -1515,7 +1512,7 @@ func (cn *conn) sendBinaryModeQuery(query string, args []driver.Value) {
	cn.send(b)
}

func (c *conn) processParameterStatus(r *readBuf) {
func (cn *conn) processParameterStatus(r *readBuf) {
	var err error

	param := r.string()

@ -1526,13 +1523,13 @@ func (c *conn) processParameterStatus(r *readBuf) {
		var minor int
		_, err = fmt.Sscanf(r.string(), "%d.%d.%d", &major1, &major2, &minor)
		if err == nil {
			c.parameterStatus.serverVersion = major1*10000 + major2*100 + minor
			cn.parameterStatus.serverVersion = major1*10000 + major2*100 + minor
		}

	case "TimeZone":
		c.parameterStatus.currentLocation, err = time.LoadLocation(r.string())
		cn.parameterStatus.currentLocation, err = time.LoadLocation(r.string())
		if err != nil {
			c.parameterStatus.currentLocation = nil
			cn.parameterStatus.currentLocation = nil
		}

	default:

@ -1540,8 +1537,8 @@ func (c *conn) processParameterStatus(r *readBuf) {
	}
}

func (c *conn) processReadyForQuery(r *readBuf) {
	c.txnStatus = transactionStatus(r.byte())
func (cn *conn) processReadyForQuery(r *readBuf) {
	cn.txnStatus = transactionStatus(r.byte())
}

func (cn *conn) readReadyForQuery() {

@ -1556,9 +1553,9 @@ func (cn *conn) readReadyForQuery() {
	}
}

func (c *conn) processBackendKeyData(r *readBuf) {
	c.processID = r.int32()
	c.secretKey = r.int32()
func (cn *conn) processBackendKeyData(r *readBuf) {
	cn.processID = r.int32()
	cn.secretKey = r.int32()
}

func (cn *conn) readParseResponse() {

@ -1576,7 +1573,7 @@ func (cn *conn) readParseResponse() {
	}
}

func (cn *conn) readStatementDescribeResponse() (paramTyps []oid.Oid, colNames []string, colTyps []oid.Oid) {
func (cn *conn) readStatementDescribeResponse() (paramTyps []oid.Oid, colNames []string, colTyps []fieldDesc) {
	for {
		t, r := cn.recv1()
		switch t {

@ -1602,7 +1599,7 @@ func (cn *conn) readStatementDescribeResponse() (paramTyps []oid.Oid, colNames [
	}
}

func (cn *conn) readPortalDescribeResponse() (colNames []string, colFmts []format, colTyps []oid.Oid) {
func (cn *conn) readPortalDescribeResponse() (colNames []string, colFmts []format, colTyps []fieldDesc) {
	t, r := cn.recv1()
	switch t {
	case 'T':

@ -1698,31 +1695,33 @@ func (cn *conn) readExecuteResponse(protocolState string) (res driver.Result, co
	}
}

func parseStatementRowDescribe(r *readBuf) (colNames []string, colTyps []oid.Oid) {
func parseStatementRowDescribe(r *readBuf) (colNames []string, colTyps []fieldDesc) {
	n := r.int16()
	colNames = make([]string, n)
	colTyps = make([]oid.Oid, n)
	colTyps = make([]fieldDesc, n)
	for i := range colNames {
		colNames[i] = r.string()
		r.next(6)
		colTyps[i] = r.oid()
		r.next(6)
		colTyps[i].OID = r.oid()
		colTyps[i].Len = r.int16()
		colTyps[i].Mod = r.int32()
		// format code not known when describing a statement; always 0
		r.next(2)
	}
	return
}

func parsePortalRowDescribe(r *readBuf) (colNames []string, colFmts []format, colTyps []oid.Oid) {
func parsePortalRowDescribe(r *readBuf) (colNames []string, colFmts []format, colTyps []fieldDesc) {
	n := r.int16()
	colNames = make([]string, n)
	colFmts = make([]format, n)
	colTyps = make([]oid.Oid, n)
	colTyps = make([]fieldDesc, n)
	for i := range colNames {
		colNames[i] = r.string()
		r.next(6)
		colTyps[i] = r.oid()
		r.next(6)
		colTyps[i].OID = r.oid()
		colTyps[i].Len = r.int16()
		colTyps[i].Mod = r.int32()
		colFmts[i] = format(r.int16())
	}
	return
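The recurring change in conn.go is the swap from []oid.Oid to []fieldDesc, so the driver keeps each column's length and type modifier rather than only its type OID. Per the PostgreSQL frontend/backend protocol, every field in a RowDescription ('T') message carries: name (NUL-terminated string), table OID (int32), attribute number (int16), type OID (int32), type length (int16), type modifier (int32), and format code (int16) — exactly the bytes the updated parse functions walk. A standalone sketch of that per-field layout, using bytes.Reader and a hypothetical parseField helper in place of pq's internal readBuf:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// fieldDesc mirrors the struct added in rows.go: type OID, typlen, atttypmod.
type fieldDesc struct {
	OID uint32
	Len int16
	Mod int32
}

// parseField reads one RowDescription field: name, table OID and attnum
// (the 6 bytes pq skips with r.next(6)), type OID, length, modifier, format.
func parseField(r *bytes.Reader) (name string, fd fieldDesc, fmtCode int16, err error) {
	var nb []byte
	for {
		b, e := r.ReadByte()
		if e != nil {
			return "", fd, 0, e
		}
		if b == 0 { // field names are NUL-terminated
			break
		}
		nb = append(nb, b)
	}
	name = string(nb)
	var tableOID uint32
	var attnum int16
	binary.Read(r, binary.BigEndian, &tableOID)
	binary.Read(r, binary.BigEndian, &attnum)
	binary.Read(r, binary.BigEndian, &fd.OID)
	binary.Read(r, binary.BigEndian, &fd.Len)
	binary.Read(r, binary.BigEndian, &fd.Mod)
	err = binary.Read(r, binary.BigEndian, &fmtCode)
	return name, fd, fmtCode, err
}

func main() {
	// A hand-built field: "id", table OID 0, attnum 0, type OID 23 (int4),
	// typlen 4, typmod -1, text format.
	buf := []byte("id\x00")
	for _, v := range []interface{}{uint32(0), int16(0), uint32(23), int16(4), int32(-1), int16(0)} {
		b := new(bytes.Buffer)
		binary.Write(b, binary.BigEndian, v)
		buf = append(buf, b.Bytes()...)
	}
	name, fd, f, _ := parseField(bytes.NewReader(buf))
	fmt.Println(name, fd.OID, fd.Len, fd.Mod, f) // id 23 4 -1 0
}
```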
147
vendor/github.com/lib/pq/hstore/hstore_test.go
generated
vendored
@ -1,147 +0,0 @@
package hstore

import (
	"database/sql"
	_ "github.com/lib/pq"
	"os"
	"testing"
)

type Fatalistic interface {
	Fatal(args ...interface{})
}

func openTestConn(t Fatalistic) *sql.DB {
	datname := os.Getenv("PGDATABASE")
	sslmode := os.Getenv("PGSSLMODE")

	if datname == "" {
		os.Setenv("PGDATABASE", "pqgotest")
	}

	if sslmode == "" {
		os.Setenv("PGSSLMODE", "disable")
	}

	conn, err := sql.Open("postgres", "")
	if err != nil {
		t.Fatal(err)
	}

	return conn
}

func TestHstore(t *testing.T) {
	db := openTestConn(t)
	defer db.Close()

	// quitely create hstore if it doesn't exist
	_, err := db.Exec("CREATE EXTENSION IF NOT EXISTS hstore")
	if err != nil {
		t.Skipf("Skipping hstore tests - hstore extension create failed: %s", err.Error())
	}

	hs := Hstore{}

	// test for null-valued hstores
	err = db.QueryRow("SELECT NULL::hstore").Scan(&hs)
	if err != nil {
		t.Fatal(err)
	}
	if hs.Map != nil {
		t.Fatalf("expected null map")
	}

	err = db.QueryRow("SELECT $1::hstore", hs).Scan(&hs)
	if err != nil {
		t.Fatalf("re-query null map failed: %s", err.Error())
	}
	if hs.Map != nil {
		t.Fatalf("expected null map")
	}

	// test for empty hstores
	err = db.QueryRow("SELECT ''::hstore").Scan(&hs)
	if err != nil {
		t.Fatal(err)
	}
	if hs.Map == nil {
		t.Fatalf("expected empty map, got null map")
	}
	if len(hs.Map) != 0 {
		t.Fatalf("expected empty map, got len(map)=%d", len(hs.Map))
	}

	err = db.QueryRow("SELECT $1::hstore", hs).Scan(&hs)
	if err != nil {
		t.Fatalf("re-query empty map failed: %s", err.Error())
	}
	if hs.Map == nil {
		t.Fatalf("expected empty map, got null map")
	}
	if len(hs.Map) != 0 {
		t.Fatalf("expected empty map, got len(map)=%d", len(hs.Map))
	}

	// a few example maps to test out
	hsOnePair := Hstore{
		Map: map[string]sql.NullString{
			"key1": {"value1", true},
		},
	}

	hsThreePairs := Hstore{
		Map: map[string]sql.NullString{
			"key1": {"value1", true},
			"key2": {"value2", true},
			"key3": {"value3", true},
		},
	}

	hsSmorgasbord := Hstore{
		Map: map[string]sql.NullString{
			"nullstring":             {"NULL", true},
			"actuallynull":           {"", false},
			"NULL":                   {"NULL string key", true},
			"withbracket":            {"value>42", true},
			"withequal":              {"value=42", true},
			`"withquotes1"`:          {`this "should" be fine`, true},
			`"withquotes"2"`:         {`this "should\" also be fine`, true},
			"embedded1":              {"value1=>x1", true},
			"embedded2":              {`"value2"=>x2`, true},
			"withnewlines":           {"\n\nvalue\t=>2", true},
			"<<all sorts of crazy>>": {`this, "should,\" also, => be fine`, true},
		},
	}

	// test encoding in query params, then decoding during Scan
	testBidirectional := func(h Hstore) {
		err = db.QueryRow("SELECT $1::hstore", h).Scan(&hs)
		if err != nil {
			t.Fatalf("re-query %d-pair map failed: %s", len(h.Map), err.Error())
		}
		if hs.Map == nil {
			t.Fatalf("expected %d-pair map, got null map", len(h.Map))
		}
		if len(hs.Map) != len(h.Map) {
			t.Fatalf("expected %d-pair map, got len(map)=%d", len(h.Map), len(hs.Map))
		}

		for key, val := range hs.Map {
			otherval, found := h.Map[key]
			if !found {
				t.Fatalf("  key '%v' not found in %d-pair map", key, len(h.Map))
			}
			if otherval.Valid != val.Valid {
				t.Fatalf("  value %v <> %v in %d-pair map", otherval, val, len(h.Map))
			}
			if otherval.String != val.String {
				t.Fatalf("  value '%v' <> '%v' in %d-pair map", otherval.String, val.String, len(h.Map))
			}
		}
	}

	testBidirectional(hsOnePair)
	testBidirectional(hsThreePairs)
	testBidirectional(hsSmorgasbord)
}
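Only the vendored test file is removed here; the hstore.Hstore type itself remains vendored. A minimal usage sketch, assuming the hstore extension is installed and using a hypothetical DSN:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
	"github.com/lib/pq/hstore"
)

func main() {
	// Hypothetical DSN; adjust for your environment.
	db, err := sql.Open("postgres", "postgres://grafanareader:password@localhost/db?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Hstore implements sql.Scanner/driver.Valuer over map[string]sql.NullString.
	var hs hstore.Hstore
	if err := db.QueryRow(`SELECT 'a=>1,b=>NULL'::hstore`).Scan(&hs); err != nil {
		log.Fatal(err)
	}
	for k, v := range hs.Map {
		fmt.Println(k, v.String, v.Valid) // "a 1 true" and "b  false"
	}
}
```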
28
vendor/github.com/lib/pq/listen_example/doc.go
generated
vendored
@ -18,11 +18,11 @@ mechanism to avoid polling the database while waiting for more work to arrive.
package main

import (
	"github.com/lib/pq"

	"database/sql"
	"fmt"
	"time"

	"github.com/lib/pq"
)

func doWork(db *sql.DB, work int64) {

@ -51,21 +51,15 @@ mechanism to avoid polling the database while waiting for more work to arrive.
}

func waitForNotification(l *pq.Listener) {
	for {
		select {
		case <-l.Notify:
			fmt.Println("received notification, new work available")
			return
		case <-time.After(90 * time.Second):
			go func() {
				l.Ping()
			}()
			// Check if there's more work available, just in case it takes
			// a while for the Listener to notice connection loss and
			// reconnect.
			fmt.Println("received no work for 90 seconds, checking for new work")
			return
		}
	select {
	case <-l.Notify:
		fmt.Println("received notification, new work available")
	case <-time.After(90 * time.Second):
		go l.Ping()
		// Check if there's more work available, just in case it takes
		// a while for the Listener to notice connection loss and
		// reconnect.
		fmt.Println("received no work for 90 seconds, checking for new work")
	}
}
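The upstream cleanup drops a redundant for loop around the select (both branches returned on the first iteration) and inlines the Ping goroutine; behavior is unchanged. For context, a minimal LISTEN/NOTIFY consumer built on the same pq.Listener API — the connection string and channel name are hypothetical:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/lib/pq"
)

func main() {
	conninfo := "postgres://user:password@localhost/db?sslmode=disable" // hypothetical

	reportProblem := func(ev pq.ListenerEventType, err error) {
		if err != nil {
			log.Println(err)
		}
	}

	// Reconnect with 10s..1min backoff if the connection drops.
	l := pq.NewListener(conninfo, 10*time.Second, time.Minute, reportProblem)
	if err := l.Listen("getwork"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case n := <-l.Notify:
			if n != nil { // nil signals a re-established connection
				fmt.Println("notified on channel", n.Channel)
			}
		case <-time.After(90 * time.Second):
			go l.Ping() // nudge the connection, as in the vendored example
		}
	}
}
```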
74
vendor/github.com/lib/pq/oid/gen.go
generated
vendored
@ -1,74 +0,0 @@
// +build ignore

// Generate the table of OID values
// Run with 'go run gen.go'.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"

	"database/sql"
	_ "github.com/lib/pq"
)

func main() {
	datname := os.Getenv("PGDATABASE")
	sslmode := os.Getenv("PGSSLMODE")

	if datname == "" {
		os.Setenv("PGDATABASE", "pqgotest")
	}

	if sslmode == "" {
		os.Setenv("PGSSLMODE", "disable")
	}

	db, err := sql.Open("postgres", "")
	if err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("gofmt")
	cmd.Stderr = os.Stderr
	w, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	f, err := os.Create("types.go")
	if err != nil {
		log.Fatal(err)
	}
	cmd.Stdout = f
	err = cmd.Start()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Fprintln(w, "// generated by 'go run gen.go'; do not edit")
	fmt.Fprintln(w, "\npackage oid")
	fmt.Fprintln(w, "const (")
	rows, err := db.Query(`
		SELECT typname, oid
		FROM pg_type WHERE oid < 10000
		ORDER BY oid;
	`)
	if err != nil {
		log.Fatal(err)
	}
	var name string
	var oid int
	for rows.Next() {
		err = rows.Scan(&name, &oid)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Fprintf(w, "T_%s Oid = %d\n", name, oid)
	}
	if err = rows.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Fprintln(w, ")")
	w.Close()
	cmd.Wait()
}
184
vendor/github.com/lib/pq/oid/types.go
generated
vendored
@ -1,4 +1,4 @@
// generated by 'go run gen.go'; do not edit
// Code generated by gen.go. DO NOT EDIT.

package oid

@ -18,6 +18,7 @@ const (
	T_xid            Oid = 28
	T_cid            Oid = 29
	T_oidvector      Oid = 30
	T_pg_ddl_command Oid = 32
	T_pg_type        Oid = 71
	T_pg_attribute   Oid = 75
	T_pg_proc        Oid = 81

@ -28,6 +29,7 @@ const (
	T_pg_node_tree     Oid = 194
	T__json            Oid = 199
	T_smgr             Oid = 210
	T_index_am_handler Oid = 325
	T_point            Oid = 600
	T_lseg             Oid = 601
	T_path             Oid = 602

@ -133,6 +135,9 @@ const (
	T__uuid         Oid = 2951
	T_txid_snapshot Oid = 2970
	T_fdw_handler   Oid = 3115
	T_pg_lsn        Oid = 3220
	T__pg_lsn       Oid = 3221
	T_tsm_handler   Oid = 3310
	T_anyenum       Oid = 3500
	T_tsvector      Oid = 3614
	T_tsquery       Oid = 3615

@ -144,6 +149,8 @@ const (
	T__regconfig     Oid = 3735
	T_regdictionary  Oid = 3769
	T__regdictionary Oid = 3770
	T_jsonb          Oid = 3802
	T__jsonb         Oid = 3807
	T_anyrange       Oid = 3831
	T_event_trigger  Oid = 3838
	T_int4range      Oid = 3904

@ -158,4 +165,179 @@ const (
	T__daterange    Oid = 3913
	T_int8range     Oid = 3926
	T__int8range    Oid = 3927
	T_pg_shseclabel Oid = 4066
	T_regnamespace  Oid = 4089
	T__regnamespace Oid = 4090
	T_regrole       Oid = 4096
	T__regrole      Oid = 4097
)

var TypeName = map[Oid]string{
	T_bool:             "BOOL",
	T_bytea:            "BYTEA",
	T_char:             "CHAR",
	T_name:             "NAME",
	T_int8:             "INT8",
	T_int2:             "INT2",
	T_int2vector:       "INT2VECTOR",
	T_int4:             "INT4",
	T_regproc:          "REGPROC",
	T_text:             "TEXT",
	T_oid:              "OID",
	T_tid:              "TID",
	T_xid:              "XID",
	T_cid:              "CID",
	T_oidvector:        "OIDVECTOR",
	T_pg_ddl_command:   "PG_DDL_COMMAND",
	T_pg_type:          "PG_TYPE",
	T_pg_attribute:     "PG_ATTRIBUTE",
	T_pg_proc:          "PG_PROC",
	T_pg_class:         "PG_CLASS",
	T_json:             "JSON",
	T_xml:              "XML",
	T__xml:             "_XML",
	T_pg_node_tree:     "PG_NODE_TREE",
	T__json:            "_JSON",
	T_smgr:             "SMGR",
	T_index_am_handler: "INDEX_AM_HANDLER",
	T_point:            "POINT",
	T_lseg:             "LSEG",
	T_path:             "PATH",
	T_box:              "BOX",
	T_polygon:          "POLYGON",
	T_line:             "LINE",
	T__line:            "_LINE",
	T_cidr:             "CIDR",
	T__cidr:            "_CIDR",
	T_float4:           "FLOAT4",
	T_float8:           "FLOAT8",
	T_abstime:          "ABSTIME",
	T_reltime:          "RELTIME",
	T_tinterval:        "TINTERVAL",
	T_unknown:          "UNKNOWN",
	T_circle:           "CIRCLE",
	T__circle:          "_CIRCLE",
	T_money:            "MONEY",
	T__money:           "_MONEY",
	T_macaddr:          "MACADDR",
	T_inet:             "INET",
	T__bool:            "_BOOL",
	T__bytea:           "_BYTEA",
	T__char:            "_CHAR",
	T__name:            "_NAME",
	T__int2:            "_INT2",
	T__int2vector:      "_INT2VECTOR",
	T__int4:            "_INT4",
	T__regproc:         "_REGPROC",
	T__text:            "_TEXT",
	T__tid:             "_TID",
	T__xid:             "_XID",
	T__cid:             "_CID",
	T__oidvector:       "_OIDVECTOR",
	T__bpchar:          "_BPCHAR",
	T__varchar:         "_VARCHAR",
	T__int8:            "_INT8",
	T__point:           "_POINT",
	T__lseg:            "_LSEG",
	T__path:            "_PATH",
	T__box:             "_BOX",
	T__float4:          "_FLOAT4",
	T__float8:          "_FLOAT8",
	T__abstime:         "_ABSTIME",
	T__reltime:         "_RELTIME",
	T__tinterval:       "_TINTERVAL",
	T__polygon:         "_POLYGON",
	T__oid:             "_OID",
	T_aclitem:          "ACLITEM",
	T__aclitem:         "_ACLITEM",
	T__macaddr:         "_MACADDR",
	T__inet:            "_INET",
	T_bpchar:           "BPCHAR",
	T_varchar:          "VARCHAR",
	T_date:             "DATE",
	T_time:             "TIME",
	T_timestamp:        "TIMESTAMP",
	T__timestamp:       "_TIMESTAMP",
	T__date:            "_DATE",
	T__time:            "_TIME",
	T_timestamptz:      "TIMESTAMPTZ",
	T__timestamptz:     "_TIMESTAMPTZ",
	T_interval:         "INTERVAL",
	T__interval:        "_INTERVAL",
	T__numeric:         "_NUMERIC",
	T_pg_database:      "PG_DATABASE",
	T__cstring:         "_CSTRING",
	T_timetz:           "TIMETZ",
	T__timetz:          "_TIMETZ",
	T_bit:              "BIT",
	T__bit:             "_BIT",
	T_varbit:           "VARBIT",
	T__varbit:          "_VARBIT",
	T_numeric:          "NUMERIC",
	T_refcursor:        "REFCURSOR",
	T__refcursor:       "_REFCURSOR",
	T_regprocedure:     "REGPROCEDURE",
	T_regoper:          "REGOPER",
	T_regoperator:      "REGOPERATOR",
	T_regclass:         "REGCLASS",
	T_regtype:          "REGTYPE",
	T__regprocedure:    "_REGPROCEDURE",
	T__regoper:         "_REGOPER",
	T__regoperator:     "_REGOPERATOR",
	T__regclass:        "_REGCLASS",
	T__regtype:         "_REGTYPE",
	T_record:           "RECORD",
	T_cstring:          "CSTRING",
	T_any:              "ANY",
	T_anyarray:         "ANYARRAY",
	T_void:             "VOID",
	T_trigger:          "TRIGGER",
	T_language_handler: "LANGUAGE_HANDLER",
	T_internal:         "INTERNAL",
	T_opaque:           "OPAQUE",
	T_anyelement:       "ANYELEMENT",
	T__record:          "_RECORD",
	T_anynonarray:      "ANYNONARRAY",
	T_pg_authid:        "PG_AUTHID",
	T_pg_auth_members:  "PG_AUTH_MEMBERS",
	T__txid_snapshot:   "_TXID_SNAPSHOT",
	T_uuid:             "UUID",
	T__uuid:            "_UUID",
	T_txid_snapshot:    "TXID_SNAPSHOT",
	T_fdw_handler:      "FDW_HANDLER",
	T_pg_lsn:           "PG_LSN",
	T__pg_lsn:          "_PG_LSN",
	T_tsm_handler:      "TSM_HANDLER",
	T_anyenum:          "ANYENUM",
	T_tsvector:         "TSVECTOR",
	T_tsquery:          "TSQUERY",
	T_gtsvector:        "GTSVECTOR",
	T__tsvector:        "_TSVECTOR",
	T__gtsvector:       "_GTSVECTOR",
	T__tsquery:         "_TSQUERY",
	T_regconfig:        "REGCONFIG",
	T__regconfig:       "_REGCONFIG",
	T_regdictionary:    "REGDICTIONARY",
	T__regdictionary:   "_REGDICTIONARY",
	T_jsonb:            "JSONB",
	T__jsonb:           "_JSONB",
	T_anyrange:         "ANYRANGE",
	T_event_trigger:    "EVENT_TRIGGER",
	T_int4range:        "INT4RANGE",
	T__int4range:       "_INT4RANGE",
	T_numrange:         "NUMRANGE",
	T__numrange:        "_NUMRANGE",
	T_tsrange:          "TSRANGE",
	T__tsrange:         "_TSRANGE",
	T_tstzrange:        "TSTZRANGE",
	T__tstzrange:       "_TSTZRANGE",
	T_daterange:        "DATERANGE",
	T__daterange:       "_DATERANGE",
	T_int8range:        "INT8RANGE",
	T__int8range:       "_INT8RANGE",
	T_pg_shseclabel:    "PG_SHSECLABEL",
	T_regnamespace:     "REGNAMESPACE",
	T__regnamespace:    "_REGNAMESPACE",
	T_regrole:          "REGROLE",
	T__regrole:         "_REGROLE",
}
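Besides the new OID constants, the notable addition here is the exported TypeName map, which the driver uses below to report a column's database type name. A trivial lookup sketch:

```go
package main

import (
	"fmt"

	"github.com/lib/pq/oid"
)

func main() {
	// TypeName maps a pg_type OID constant to its uppercase type name.
	fmt.Println(oid.TypeName[oid.T_int4])  // INT4
	fmt.Println(oid.TypeName[oid.T_jsonb]) // JSONB
}
```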
93
vendor/github.com/lib/pq/rows.go
generated
vendored
Normal file
@ -0,0 +1,93 @@
package pq

import (
	"math"
	"reflect"
	"time"

	"github.com/lib/pq/oid"
)

const headerSize = 4

type fieldDesc struct {
	// The object ID of the data type.
	OID oid.Oid
	// The data type size (see pg_type.typlen).
	// Note that negative values denote variable-width types.
	Len int
	// The type modifier (see pg_attribute.atttypmod).
	// The meaning of the modifier is type-specific.
	Mod int
}

func (fd fieldDesc) Type() reflect.Type {
	switch fd.OID {
	case oid.T_int8:
		return reflect.TypeOf(int64(0))
	case oid.T_int4:
		return reflect.TypeOf(int32(0))
	case oid.T_int2:
		return reflect.TypeOf(int16(0))
	case oid.T_varchar, oid.T_text:
		return reflect.TypeOf("")
	case oid.T_bool:
		return reflect.TypeOf(false)
	case oid.T_date, oid.T_time, oid.T_timetz, oid.T_timestamp, oid.T_timestamptz:
		return reflect.TypeOf(time.Time{})
	case oid.T_bytea:
		return reflect.TypeOf([]byte(nil))
	default:
		return reflect.TypeOf(new(interface{})).Elem()
	}
}

func (fd fieldDesc) Name() string {
	return oid.TypeName[fd.OID]
}

func (fd fieldDesc) Length() (length int64, ok bool) {
	switch fd.OID {
	case oid.T_text, oid.T_bytea:
		return math.MaxInt64, true
	case oid.T_varchar, oid.T_bpchar:
		return int64(fd.Mod - headerSize), true
	default:
		return 0, false
	}
}

func (fd fieldDesc) PrecisionScale() (precision, scale int64, ok bool) {
	switch fd.OID {
	case oid.T_numeric, oid.T__numeric:
		mod := fd.Mod - headerSize
		precision = int64((mod >> 16) & 0xffff)
		scale = int64(mod & 0xffff)
		return precision, scale, true
	default:
		return 0, 0, false
	}
}

// ColumnTypeScanType returns the value type that can be used to scan types into.
func (rs *rows) ColumnTypeScanType(index int) reflect.Type {
	return rs.colTyps[index].Type()
}

// ColumnTypeDatabaseTypeName return the database system type name.
func (rs *rows) ColumnTypeDatabaseTypeName(index int) string {
	return rs.colTyps[index].Name()
}

// ColumnTypeLength returns the length of the column type if the column is a
// variable length type. If the column is not a variable length type ok
// should return false.
func (rs *rows) ColumnTypeLength(index int) (length int64, ok bool) {
	return rs.colTyps[index].Length()
}

// ColumnTypePrecisionScale should return the precision and scale for decimal
// types. If not applicable, ok should be false.
func (rs *rows) ColumnTypePrecisionScale(index int) (precision, scale int64, ok bool) {
	return rs.colTyps[index].PrecisionScale()
}
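This new rows.go is the heart of the vendor bump: the methods satisfy the optional driver.RowsColumnType* interfaces that database/sql (Go 1.8+) probes, surfacing column metadata through *sql.ColumnType. Note the typmod arithmetic in PrecisionScale: NUMERIC(10,2) is stored as (10 << 16 | 2) + 4, so stripping the 4-byte header and splitting the halves recovers precision 10, scale 2. A consumer-side sketch, with a hypothetical DSN:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	// Hypothetical DSN; adjust for your environment.
	db, err := sql.Open("postgres", "postgres://grafanareader:password@localhost/db?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query(`SELECT 1::int4 AS id, 12.34::numeric(10,2) AS price`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	cols, err := rows.ColumnTypes() // backed by the fieldDesc methods above
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cols {
		prec, scale, ok := c.DecimalSize() // -> ColumnTypePrecisionScale
		fmt.Println(c.Name(),
			c.DatabaseTypeName(), // e.g. "INT4", "NUMERIC" via oid.TypeName
			c.ScanType(),         // e.g. int32 for the int4 column
			prec, scale, ok)      // 10 2 true for the numeric column
	}
}
```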
27
vendor/vendor.json
vendored
@ -461,10 +461,31 @@
	"revisionTime": "2017-02-10T14:05:23Z"
},
{
	"checksumSHA1": "ZAj/o03zG8Ui4mZ4XmzU4yyKC04=",
	"checksumSHA1": "RYMOEINLFNWIJk8aKNifSlPhg9U=",
	"path": "github.com/lib/pq",
	"revision": "dd1fe2071026ce53f36a39112e645b4d4f5793a4",
	"revisionTime": "2017-07-07T05:36:02Z"
	"revision": "23da1db4f16d9658a86ae9b717c245fc078f10f1",
	"revisionTime": "2017-09-18T17:50:43Z"
},
{
	"checksumSHA1": "jaCQF1par6Jl8g+V2Cgp0n/0wSc=",
	"origin": "github.com/grafana/grafana/vendor/github.com/lib/pq/hstore",
	"path": "github.com/lib/pq/hstore",
	"revision": "23da1db4f16d9658a86ae9b717c245fc078f10f1",
	"revisionTime": "2017-09-18T17:50:43Z"
},
{
	"checksumSHA1": "mJHrY33tDs2MRhHt+XunkRF/5ek=",
	"origin": "github.com/grafana/grafana/vendor/github.com/lib/pq/listen_example",
	"path": "github.com/lib/pq/listen_example",
	"revision": "23da1db4f16d9658a86ae9b717c245fc078f10f1",
	"revisionTime": "2017-09-18T17:50:43Z"
},
{
	"checksumSHA1": "AU3fA8Sm33Vj9PBoRPSeYfxLRuE=",
	"origin": "github.com/grafana/grafana/vendor/github.com/lib/pq/oid",
	"path": "github.com/lib/pq/oid",
	"revision": "23da1db4f16d9658a86ae9b717c245fc078f10f1",
	"revisionTime": "2017-09-18T17:50:43Z"
},
{
	"checksumSHA1": "bKMZjd2wPw13VwoE7mBeSv5djFA=",