K8s: Refactor config/options for aggregation (#81739)

This commit is contained in:
Todd Treece
2024-02-01 17:27:30 -05:00
committed by GitHub
parent 7a17963ab9
commit 67b6be5515
93 changed files with 1104 additions and 1448 deletions


@@ -0,0 +1,79 @@
# Grafana Kubernetes compatible API Server
## Basic Setup
Enable the required feature toggles (for example, in `custom.ini`):
```ini
[feature_toggles]
grafanaAPIServer = true
kubernetesPlaylists = true
```
Start Grafana:
```bash
make run
```
## Enable dual write to `etcd`
Start `etcd`:
```bash
make devenv sources=etcd
```
Set storage type and etcd server address in `custom.ini`:
```ini
[grafana-apiserver]
storage_type = etcd
etcd_servers = 127.0.0.1:2379
```
## Enable dual write to JSON files
Set storage type:
```ini
[grafana-apiserver]
storage_type = file
```
Objects will be written to disk under the `{data.path}/grafana-apiserver/` directory.
For example:
```
data/grafana-apiserver
├── grafana.kubeconfig
└── playlist.grafana.app
└── playlists
└── default
└── hi.json
```
### `kubectl` access
For `kubectl` to work, Grafana needs to run over HTTPS. To simplify development, you can use:
```ini
app_mode = development
[feature_toggles]
grafanaAPIServer = true
grafanaAPIServerEnsureKubectlAccess = true
kubernetesPlaylists = true
```
This will create a development kubeconfig and start a parallel SSL listener. Register it by
navigating to the root Grafana folder and running:
```bash
export KUBECONFIG=$PWD/data/grafana-apiserver/grafana.kubeconfig
kubectl api-resources
```
### Grafana API Access
The Kubernetes-compatible API can be accessed using existing Grafana AuthN at [http://localhost:3000/apis](http://localhost:3000/apis).
The equivalent OpenAPI docs can be seen at [http://localhost:3000/swagger](http://localhost:3000/swagger);
select the relevant API from the dropdown in the upper right.


@@ -0,0 +1,47 @@
# aggregator
This package is intended to power the aggregation of microservices within Grafana. Both the concept
and the implementation are largely borrowed from [kube-aggregator](https://github.com/kubernetes/kube-aggregator).
## Why aggregate services?
Grafana's future architecture will follow the same API server design as Kubernetes. API servers
provide a standard way of stitching together API groups through discovery and shared routing patterns,
allowing them to aggregate seamlessly under a parent API server. Since we want to break the Grafana monolith up into
more functionally divided microservices, aggregation lets us keep serving all of them
under a single address. Other benefits of aggregation include free health checks and the ability to
roll out features for each service independently, without downtime.
To read more about the concept, see
[here](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-extension-api-server/).
Note that this aggregation will be an entirely internal detail of Grafana. External, fully functional API servers that
may themselves act as parents to Grafana will never be made aware of these internally aggregated services. Any `APIService`
objects for Grafana groups registered in a real K8s environment will point at the address of Grafana's
parent server (which will bundle grafana-aggregator).
### kube-aggregator versus grafana-aggregator
The `grafana-aggregator` component works for Grafana much as `kube-aggregator` works for `kube-apiserver`, the major
difference being that it doesn't require core V1 APIs such as `Service`. Early on, we decided not to include core V1
APIs in the root Grafana API Server. To still be able to implement aggregation, we do the following in this Go
package:
1. We do not start the core shared informer factories or any default controllers that rely on them.
This is achieved using the `DisabledPostStartHooks` facility in the GenericAPIServer's RecommendedConfig.
2. We provide an `externalname` Kind API implementation under the `service.grafana.app` group which is functionally
equivalent to the concept of the same name in `core/v1/Service`.
3. Lastly, we swap the default available-condition controller for a custom one based on
our `externalname` (`service.grafana.app`) implementation. We register separate `PostStartHooks`
using `AddPostStartHookOrDie` on the GenericAPIServer to start the custom controller as well as the
requisite informer factories for our own `externalname` Kind.
4. For now, we bundle apiextensions-apiserver under our aggregator component. This is slightly different from K8s,
where kube-apiserver is the top-level component and the controlplane, aggregator, and apiextensions-apiserver
live under it instead.
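The hook plumbing in steps 1 and 3 can be illustrated without the Kubernetes machinery. The following is a simplified Go sketch of the pattern only; `server`, `AddPostStartHook`, and `DisablePostStartHook` here are toy stand-ins, not the real GenericAPIServer API:

```go
package main

import "fmt"

// hook is a named function run after server startup, mirroring the
// post-start hook concept in GenericAPIServer.
type hook struct {
	name string
	fn   func()
}

// server keeps hooks in registration order plus a set of disabled names,
// mimicking the DisabledPostStartHooks facility.
type server struct {
	hooks    []hook
	disabled map[string]bool
}

func newServer() *server {
	return &server{disabled: map[string]bool{}}
}

func (s *server) AddPostStartHook(name string, fn func()) {
	s.hooks = append(s.hooks, hook{name: name, fn: fn})
}

func (s *server) DisablePostStartHook(name string) {
	s.disabled[name] = true
}

// Run fires every enabled hook once the server is "up".
func (s *server) Run() {
	for _, h := range s.hooks {
		if s.disabled[h.name] {
			fmt.Println("skipping disabled hook:", h.name)
			continue
		}
		h.fn()
	}
}

func main() {
	s := newServer()
	// step 1: the default core-informer hook is registered but disabled
	s.AddPostStartHook("start-core-informers", func() { fmt.Println("core informers started") })
	s.DisablePostStartHook("start-core-informers")
	// step 3: a custom availability controller hook runs in its place
	s.AddPostStartHook("custom-available-controller", func() { fmt.Println("custom controller started") })
	s.Run()
}
```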
### Gotchas (Pay Attention)
1. `grafana-aggregator` uses file storage under `data/grafana-aggregator` (`apiregistration.k8s.io`,
`service.grafana.app`) and `data/grafana-apiextensions` (`apiextensions.k8s.io`).
2. Since `grafana-aggregator` writes configuration (TLS and kubeconfig) that is used when invoking aggregated
servers, make sure you start the aggregated services after launching the aggregator during local development.


@@ -0,0 +1,285 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/aggregator.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Kubernetes Authors.
// Provenance-includes-location: https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/server.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Kubernetes Authors.
// Provenance-includes-location: https://github.com/kubernetes/kubernetes/blob/master/pkg/controlplane/apiserver/apiextensions.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Kubernetes Authors.
package aggregator
import (
"crypto/tls"
"fmt"
"net/http"
"strings"
"sync"
"time"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
utilnet "k8s.io/apimachinery/pkg/util/net"
"k8s.io/apimachinery/pkg/util/sets"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/apiserver/pkg/server/healthz"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/tools/cache"
"k8s.io/klog/v2"
v1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
v1helper "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1/helper"
aggregatorapiserver "k8s.io/kube-aggregator/pkg/apiserver"
apiregistrationclientset "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
apiregistrationclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/typed/apiregistration/v1"
apiregistrationInformers "k8s.io/kube-aggregator/pkg/client/informers/externalversions/apiregistration/v1"
"k8s.io/kube-aggregator/pkg/controllers/autoregister"
serviceclientset "github.com/grafana/grafana/pkg/generated/clientset/versioned"
informersv0alpha1 "github.com/grafana/grafana/pkg/generated/informers/externalversions"
"github.com/grafana/grafana/pkg/services/apiserver/options"
)
func CreateAggregatorConfig(commandOptions *options.Options, sharedConfig genericapiserver.RecommendedConfig) (*aggregatorapiserver.Config, informersv0alpha1.SharedInformerFactory, error) {
// Create a fake clientset and informers for the k8s v1 API group.
// These are not used in grafana's aggregator because v1 APIs are not available.
fakev1Informers := informers.NewSharedInformerFactory(fake.NewSimpleClientset(), 10*time.Minute)
serviceClient, err := serviceclientset.NewForConfig(sharedConfig.LoopbackClientConfig)
if err != nil {
return nil, nil, err
}
sharedInformerFactory := informersv0alpha1.NewSharedInformerFactory(
serviceClient,
5*time.Minute, // this is effectively used as a refresh interval right now. Might want to do something nicer later on.
)
serviceResolver := NewExternalNameResolver(sharedInformerFactory.Service().V0alpha1().ExternalNames().Lister())
aggregatorConfig := &aggregatorapiserver.Config{
GenericConfig: &genericapiserver.RecommendedConfig{
Config: sharedConfig.Config,
SharedInformerFactory: fakev1Informers,
ClientConfig: sharedConfig.LoopbackClientConfig,
},
ExtraConfig: aggregatorapiserver.ExtraConfig{
ProxyClientCertFile: commandOptions.AggregatorOptions.ProxyClientCertFile,
ProxyClientKeyFile: commandOptions.AggregatorOptions.ProxyClientKeyFile,
// NOTE: while ProxyTransport can be skipped in the configuration, it allows honoring
// DISABLE_HTTP2, HTTPS_PROXY and NO_PROXY env vars as needed
ProxyTransport: createProxyTransport(),
ServiceResolver: serviceResolver,
},
}
if err := commandOptions.AggregatorOptions.ApplyTo(aggregatorConfig, commandOptions.RecommendedOptions.Etcd, commandOptions.StorageOptions.DataPath); err != nil {
return nil, nil, err
}
return aggregatorConfig, sharedInformerFactory, nil
}
func CreateAggregatorServer(aggregatorConfig *aggregatorapiserver.Config, sharedInformerFactory informersv0alpha1.SharedInformerFactory, delegateAPIServer genericapiserver.DelegationTarget) (*aggregatorapiserver.APIAggregator, error) {
completedConfig := aggregatorConfig.Complete()
aggregatorServer, err := completedConfig.NewWithDelegate(delegateAPIServer)
if err != nil {
return nil, err
}
// create controllers for auto-registration
apiRegistrationClient, err := apiregistrationclient.NewForConfig(completedConfig.GenericConfig.LoopbackClientConfig)
if err != nil {
return nil, err
}
autoRegistrationController := autoregister.NewAutoRegisterController(aggregatorServer.APIRegistrationInformers.Apiregistration().V1().APIServices(), apiRegistrationClient)
apiServices := apiServicesToRegister(delegateAPIServer, autoRegistrationController)
// Imbue all builtin group-priorities onto the aggregated discovery
if completedConfig.GenericConfig.AggregatedDiscoveryGroupManager != nil {
for gv, entry := range APIVersionPriorities {
completedConfig.GenericConfig.AggregatedDiscoveryGroupManager.SetGroupVersionPriority(metav1.GroupVersion(gv), int(entry.Group), int(entry.Version))
}
}
err = aggregatorServer.GenericAPIServer.AddPostStartHook("grafana-apiserver-autoregistration", func(context genericapiserver.PostStartHookContext) error {
go func() {
autoRegistrationController.Run(5, context.StopCh)
}()
return nil
})
if err != nil {
return nil, err
}
err = aggregatorServer.GenericAPIServer.AddBootSequenceHealthChecks(
makeAPIServiceAvailableHealthCheck(
"autoregister-completion",
apiServices,
aggregatorServer.APIRegistrationInformers.Apiregistration().V1().APIServices(),
),
)
if err != nil {
return nil, err
}
apiregistrationClient, err := apiregistrationclientset.NewForConfig(completedConfig.GenericConfig.LoopbackClientConfig)
if err != nil {
return nil, err
}
availableController, err := NewAvailableConditionController(
aggregatorServer.APIRegistrationInformers.Apiregistration().V1().APIServices(),
sharedInformerFactory.Service().V0alpha1().ExternalNames(),
apiregistrationClient.ApiregistrationV1(),
nil,
(func() ([]byte, []byte))(nil),
completedConfig.ExtraConfig.ServiceResolver,
)
if err != nil {
return nil, err
}
aggregatorServer.GenericAPIServer.AddPostStartHookOrDie("apiservice-status-override-available-controller", func(context genericapiserver.PostStartHookContext) error {
// if we end up blocking for long periods of time, we may need to increase workers.
go availableController.Run(5, context.StopCh)
return nil
})
aggregatorServer.GenericAPIServer.AddPostStartHookOrDie("start-grafana-aggregator-informers", func(context genericapiserver.PostStartHookContext) error {
sharedInformerFactory.Start(context.StopCh)
aggregatorServer.APIRegistrationInformers.Start(context.StopCh)
return nil
})
return aggregatorServer, nil
}
func makeAPIService(gv schema.GroupVersion) *v1.APIService {
apiServicePriority, ok := APIVersionPriorities[gv]
if !ok {
// if we aren't found, then we shouldn't register ourselves because it could result in a CRD group version
// being permanently stuck in the APIServices list.
klog.Infof("Skipping APIService creation for %v", gv)
return nil
}
return &v1.APIService{
ObjectMeta: metav1.ObjectMeta{Name: gv.Version + "." + gv.Group},
Spec: v1.APIServiceSpec{
Group: gv.Group,
Version: gv.Version,
GroupPriorityMinimum: apiServicePriority.Group,
VersionPriority: apiServicePriority.Version,
},
}
}
// makeAPIServiceAvailableHealthCheck returns a healthz check that returns healthy
// once all of the specified services have been observed to be available at least once.
func makeAPIServiceAvailableHealthCheck(name string, apiServices []*v1.APIService, apiServiceInformer apiregistrationInformers.APIServiceInformer) healthz.HealthChecker {
// Track the auto-registered API services that have not been observed to be available yet
pendingServiceNamesLock := &sync.RWMutex{}
pendingServiceNames := sets.NewString()
for _, service := range apiServices {
pendingServiceNames.Insert(service.Name)
}
// When an APIService in the list is seen as available, remove it from the pending list
handleAPIServiceChange := func(service *v1.APIService) {
pendingServiceNamesLock.Lock()
defer pendingServiceNamesLock.Unlock()
if !pendingServiceNames.Has(service.Name) {
return
}
if v1helper.IsAPIServiceConditionTrue(service, v1.Available) {
pendingServiceNames.Delete(service.Name)
}
}
// Watch add/update events for APIServices
_, _ = apiServiceInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) { handleAPIServiceChange(obj.(*v1.APIService)) },
UpdateFunc: func(old, new interface{}) { handleAPIServiceChange(new.(*v1.APIService)) },
})
// Don't return healthy until the pending list is empty
return healthz.NamedCheck(name, func(r *http.Request) error {
pendingServiceNamesLock.RLock()
defer pendingServiceNamesLock.RUnlock()
if pendingServiceNames.Len() > 0 {
return fmt.Errorf("missing APIService: %v", pendingServiceNames.List())
}
return nil
})
}
// Priority defines group Priority that is used in discovery. This controls
// group position in the kubectl output.
type Priority struct {
// Group indicates the order of the Group relative to other groups.
Group int32
// Version indicates the relative order of the Version inside of its group.
Version int32
}
// APIVersionPriorities lists the desired group and version-within-group order of the underlying servers.
// The proper way to resolve this, letting the aggregator know that order,
// is to refactor the genericapiserver.DelegationTarget to include a list of priorities based on which APIs were installed.
// This requires the APIGroupInfo struct to evolve and include the concept of priorities; to avoid mistakes, the core storage map there needs to be updated.
// That ripples out every bit as far as you'd expect, so for 1.7 we'll include the list here instead of building it up during storage.
var APIVersionPriorities = map[schema.GroupVersion]Priority{
{Group: "", Version: "v1"}: {Group: 18000, Version: 1},
// to my knowledge, nothing below here collides
{Group: "admissionregistration.k8s.io", Version: "v1"}: {Group: 16700, Version: 15},
{Group: "admissionregistration.k8s.io", Version: "v1beta1"}: {Group: 16700, Version: 12},
{Group: "admissionregistration.k8s.io", Version: "v1alpha1"}: {Group: 16700, Version: 9},
// Append a new group to the end of the list if unsure.
// You can use min(existing group)-100 as the initial value for a group.
// Version can be set to 9 (to have space around) for a new group.
}
func apiServicesToRegister(delegateAPIServer genericapiserver.DelegationTarget, registration autoregister.AutoAPIServiceRegistration) []*v1.APIService {
apiServices := []*v1.APIService{}
for _, curr := range delegateAPIServer.ListedPaths() {
if curr == "/api/v1" {
apiService := makeAPIService(schema.GroupVersion{Group: "", Version: "v1"})
registration.AddAPIServiceToSyncOnStart(apiService)
apiServices = append(apiServices, apiService)
continue
}
if !strings.HasPrefix(curr, "/apis/") {
continue
}
// this comes back in a list that looks like /apis/rbac.authorization.k8s.io/v1alpha1
tokens := strings.Split(curr, "/")
if len(tokens) != 4 {
continue
}
apiService := makeAPIService(schema.GroupVersion{Group: tokens[2], Version: tokens[3]})
if apiService == nil {
continue
}
registration.AddAPIServiceToSyncOnStart(apiService)
apiServices = append(apiServices, apiService)
}
return apiServices
}
// NOTE: the function below is imported from https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/server.go#L197
// createProxyTransport creates the dialer infrastructure to connect to the api servers.
func createProxyTransport() *http.Transport {
// NOTE: We don't set proxyDialerFn explicitly; SetTransportDefaults below fills in defaults as needed.
// See https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/util/net/http.go#L109
var proxyDialerFn utilnet.DialFunc
// Proxying to services is IP-based... don't expect to be able to verify the hostname
proxyTLSClientConfig := &tls.Config{InsecureSkipVerify: true}
proxyTransport := utilnet.SetTransportDefaults(&http.Transport{
DialContext: proxyDialerFn,
TLSClientConfig: proxyTLSClientConfig,
})
return proxyTransport
}


@@ -0,0 +1,466 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/kubernetes/kube-aggregator/blob/master/pkg/controllers/status/available_controller.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Kubernetes Authors.
package aggregator
import (
"context"
"fmt"
"net/http"
"net/url"
"reflect"
"sync"
"time"
"github.com/grafana/grafana/pkg/apis/service/v0alpha1"
informersservicev0alpha1 "github.com/grafana/grafana/pkg/generated/informers/externalversions/service/v0alpha1"
listersservicev0alpha1 "github.com/grafana/grafana/pkg/generated/listers/service/v0alpha1"
"k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/transport"
"k8s.io/client-go/util/workqueue"
"k8s.io/klog/v2"
apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
apiregistrationv1apihelper "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1/helper"
apiregistrationclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/typed/apiregistration/v1"
informers "k8s.io/kube-aggregator/pkg/client/informers/externalversions/apiregistration/v1"
listers "k8s.io/kube-aggregator/pkg/client/listers/apiregistration/v1"
"k8s.io/kube-aggregator/pkg/controllers"
)
type certKeyFunc func() ([]byte, []byte)
// ServiceResolver knows how to convert a service reference into an actual location.
type ServiceResolver interface {
ResolveEndpoint(namespace, name string, port int32) (*url.URL, error)
}
// AvailableConditionController handles checking the availability of registered API services.
type AvailableConditionController struct {
apiServiceClient apiregistrationclient.APIServicesGetter
apiServiceLister listers.APIServiceLister
apiServiceSynced cache.InformerSynced
// externalNameLister is used to get the IP to create the transport for
externalNameLister listersservicev0alpha1.ExternalNameLister
servicesSynced cache.InformerSynced
// proxyTransportDial specifies the dial function for creating unencrypted TCP connections.
proxyTransportDial *transport.DialHolder
proxyCurrentCertKeyContent certKeyFunc
serviceResolver ServiceResolver
// To allow injection for testing.
syncFn func(key string) error
queue workqueue.RateLimitingInterface
// map from service-namespace -> service-name -> apiservice names
cache map[string]map[string][]string
// this lock protects operations on the above cache
cacheLock sync.RWMutex
}
// NewAvailableConditionController returns a new AvailableConditionController.
func NewAvailableConditionController(
apiServiceInformer informers.APIServiceInformer,
externalNameInformer informersservicev0alpha1.ExternalNameInformer,
apiServiceClient apiregistrationclient.APIServicesGetter,
proxyTransportDial *transport.DialHolder,
proxyCurrentCertKeyContent certKeyFunc,
serviceResolver ServiceResolver,
) (*AvailableConditionController, error) {
c := &AvailableConditionController{
apiServiceClient: apiServiceClient,
apiServiceLister: apiServiceInformer.Lister(),
externalNameLister: externalNameInformer.Lister(),
serviceResolver: serviceResolver,
queue: workqueue.NewNamedRateLimitingQueue(
// We want a fairly tight requeue time. The controller listens to the API, but because it relies on the routability of the
// service network, it is possible for an external, non-watchable factor to affect availability. This keeps
// the maximum disruption time to a minimum, but it does prevent hot loops.
workqueue.NewItemExponentialFailureRateLimiter(5*time.Millisecond, 30*time.Second),
"AvailableConditionController"),
proxyTransportDial: proxyTransportDial,
proxyCurrentCertKeyContent: proxyCurrentCertKeyContent,
}
// resync on this one because it is low cardinality and rechecking the actual discovery
// allows us to detect health in a more timely fashion when network connectivity to
// nodes is snipped, but the network still attempts to route there. See
// https://github.com/openshift/origin/issues/17159#issuecomment-341798063
apiServiceHandler, _ := apiServiceInformer.Informer().AddEventHandlerWithResyncPeriod(
cache.ResourceEventHandlerFuncs{
AddFunc: c.addAPIService,
UpdateFunc: c.updateAPIService,
DeleteFunc: c.deleteAPIService,
},
30*time.Second)
c.apiServiceSynced = apiServiceHandler.HasSynced
serviceHandler, _ := externalNameInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.addService,
UpdateFunc: c.updateService,
DeleteFunc: c.deleteService,
})
c.servicesSynced = serviceHandler.HasSynced
c.syncFn = c.sync
return c, nil
}
func (c *AvailableConditionController) sync(key string) error {
originalAPIService, err := c.apiServiceLister.Get(key)
if apierrors.IsNotFound(err) {
return nil
}
if err != nil {
return err
}
// if a particular transport was specified, use that otherwise build one
// construct an http client that will ignore TLS verification (if someone owns the network and messes with your status
// that's not so bad) and sets a very short timeout. This is a best effort GET that provides no additional information
transportConfig := &transport.Config{
TLS: transport.TLSConfig{
Insecure: true,
},
DialHolder: c.proxyTransportDial,
}
if c.proxyCurrentCertKeyContent != nil {
proxyClientCert, proxyClientKey := c.proxyCurrentCertKeyContent()
transportConfig.TLS.CertData = proxyClientCert
transportConfig.TLS.KeyData = proxyClientKey
}
restTransport, err := transport.New(transportConfig)
if err != nil {
return err
}
discoveryClient := &http.Client{
Transport: restTransport,
// the request should happen quickly.
Timeout: 5 * time.Second,
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
}
apiService := originalAPIService.DeepCopy()
availableCondition := apiregistrationv1.APIServiceCondition{
Type: apiregistrationv1.Available,
Status: apiregistrationv1.ConditionTrue,
LastTransitionTime: metav1.Now(),
}
// local API services are always considered available
if apiService.Spec.Service == nil {
apiregistrationv1apihelper.SetAPIServiceCondition(apiService, apiregistrationv1apihelper.NewLocalAvailableAPIServiceCondition())
_, err := c.updateAPIServiceStatus(originalAPIService, apiService)
return err
}
_, err = c.externalNameLister.ExternalNames(apiService.Spec.Service.Namespace).Get(apiService.Spec.Service.Name)
if apierrors.IsNotFound(err) {
availableCondition.Status = apiregistrationv1.ConditionFalse
availableCondition.Reason = "ServiceNotFound"
availableCondition.Message = fmt.Sprintf("service/%s in %q is not present", apiService.Spec.Service.Name, apiService.Spec.Service.Namespace)
apiregistrationv1apihelper.SetAPIServiceCondition(apiService, availableCondition)
_, err := c.updateAPIServiceStatus(originalAPIService, apiService)
return err
} else if err != nil {
availableCondition.Status = apiregistrationv1.ConditionUnknown
availableCondition.Reason = "ServiceAccessError"
availableCondition.Message = fmt.Sprintf("service/%s in %q cannot be checked due to: %v", apiService.Spec.Service.Name, apiService.Spec.Service.Namespace, err)
apiregistrationv1apihelper.SetAPIServiceCondition(apiService, availableCondition)
_, err := c.updateAPIServiceStatus(originalAPIService, apiService)
return err
}
// actually try to hit the discovery endpoint when it isn't local and when we're routing as a service.
if apiService.Spec.Service != nil && c.serviceResolver != nil {
attempts := 5
results := make(chan error, attempts)
for i := 0; i < attempts; i++ {
go func() {
discoveryURL, err := c.serviceResolver.ResolveEndpoint(apiService.Spec.Service.Namespace, apiService.Spec.Service.Name, *apiService.Spec.Service.Port)
if err != nil {
results <- err
return
}
// render legacyAPIService health check path when it is delegated to a service
if apiService.Name == "v1." {
discoveryURL.Path = "/api/" + apiService.Spec.Version
} else {
discoveryURL.Path = "/apis/" + apiService.Spec.Group + "/" + apiService.Spec.Version
}
errCh := make(chan error, 1)
go func() {
// be sure to check a URL that the aggregated API server is required to serve
newReq, err := http.NewRequest("GET", discoveryURL.String(), nil)
if err != nil {
errCh <- err
return
}
// setting the system-masters identity ensures that we will always have access rights
transport.SetAuthProxyHeaders(newReq, "system:kube-aggregator", []string{"system:masters"}, nil)
resp, err := discoveryClient.Do(newReq)
if resp != nil {
_ = resp.Body.Close()
// we should always be in the 200s or 300s
if resp.StatusCode < http.StatusOK || resp.StatusCode >= http.StatusMultipleChoices {
errCh <- fmt.Errorf("bad status from %v: %v", discoveryURL, resp.StatusCode)
return
}
}
errCh <- err
}()
select {
case err = <-errCh:
if err != nil {
results <- fmt.Errorf("failing or missing response from %v: %v", discoveryURL, err)
return
}
// we had trouble with slow dial and DNS responses causing us to wait too long.
// we added this as insurance
case <-time.After(6 * time.Second):
results <- fmt.Errorf("timed out waiting for %v", discoveryURL)
return
}
results <- nil
}()
}
var lastError error
for i := 0; i < attempts; i++ {
lastError = <-results
// if we had at least one success, we are successful overall and we can return now
if lastError == nil {
break
}
}
if lastError != nil {
availableCondition.Status = apiregistrationv1.ConditionFalse
availableCondition.Reason = "FailedDiscoveryCheck"
availableCondition.Message = lastError.Error()
apiregistrationv1apihelper.SetAPIServiceCondition(apiService, availableCondition)
_, updateErr := c.updateAPIServiceStatus(originalAPIService, apiService)
if updateErr != nil {
return updateErr
}
// force a requeue to make it very obvious that this will be retried at some point in the future
// along with other requeues done via service change, endpoint change, and resync
return lastError
}
}
availableCondition.Reason = "Passed"
availableCondition.Message = "all checks passed"
apiregistrationv1apihelper.SetAPIServiceCondition(apiService, availableCondition)
_, err = c.updateAPIServiceStatus(originalAPIService, apiService)
return err
}
// updateAPIServiceStatus only issues an update if a change is detected. We have a tight resync loop to quickly detect dead
// apiservices. Doing that means we don't want to quickly issue no-op updates.
func (c *AvailableConditionController) updateAPIServiceStatus(originalAPIService, newAPIService *apiregistrationv1.APIService) (*apiregistrationv1.APIService, error) {
if equality.Semantic.DeepEqual(originalAPIService.Status, newAPIService.Status) {
return newAPIService, nil
}
orig := apiregistrationv1apihelper.GetAPIServiceConditionByType(originalAPIService, apiregistrationv1.Available)
now := apiregistrationv1apihelper.GetAPIServiceConditionByType(newAPIService, apiregistrationv1.Available)
unknown := apiregistrationv1.APIServiceCondition{
Type: apiregistrationv1.Available,
Status: apiregistrationv1.ConditionUnknown,
}
if orig == nil {
orig = &unknown
}
if now == nil {
now = &unknown
}
if *orig != *now {
klog.V(2).InfoS("changing APIService availability", "name", newAPIService.Name, "oldStatus", orig.Status, "newStatus", now.Status, "message", now.Message, "reason", now.Reason)
}
newAPIService, err := c.apiServiceClient.APIServices().UpdateStatus(context.TODO(), newAPIService, metav1.UpdateOptions{})
if err != nil {
return nil, err
}
return newAPIService, nil
}
// Run starts the AvailableConditionController loop which manages the availability condition of API services.
func (c *AvailableConditionController) Run(workers int, stopCh <-chan struct{}) {
defer utilruntime.HandleCrash()
defer c.queue.ShutDown()
klog.Info("Starting AvailableConditionController")
defer klog.Info("Shutting down AvailableConditionController")
// This waits not just for the informers to sync, but for our handlers
// to be called; since the handlers are three different ways of
// enqueueing the same thing, waiting for this permits the queue to
// maximally de-duplicate the entries.
if !controllers.WaitForCacheSync("AvailableConditionOverrideController", stopCh, c.apiServiceSynced, c.servicesSynced) {
return
}
for i := 0; i < workers; i++ {
go wait.Until(c.runWorker, time.Second, stopCh)
}
<-stopCh
}
func (c *AvailableConditionController) runWorker() {
for c.processNextWorkItem() {
}
}
// processNextWorkItem deals with one key off the queue. It returns false when it's time to quit.
func (c *AvailableConditionController) processNextWorkItem() bool {
key, quit := c.queue.Get()
if quit {
return false
}
defer c.queue.Done(key)
err := c.syncFn(key.(string))
if err == nil {
c.queue.Forget(key)
return true
}
utilruntime.HandleError(fmt.Errorf("%v failed with: %v", key, err))
c.queue.AddRateLimited(key)
return true
}
func (c *AvailableConditionController) addAPIService(obj interface{}) {
castObj := obj.(*apiregistrationv1.APIService)
klog.V(4).Infof("Adding %s", castObj.Name)
if castObj.Spec.Service != nil {
c.rebuildAPIServiceCache()
}
c.queue.Add(castObj.Name)
}
func (c *AvailableConditionController) updateAPIService(oldObj, newObj interface{}) {
castObj := newObj.(*apiregistrationv1.APIService)
oldCastObj := oldObj.(*apiregistrationv1.APIService)
klog.V(4).Infof("Updating %s", oldCastObj.Name)
if !reflect.DeepEqual(castObj.Spec.Service, oldCastObj.Spec.Service) {
c.rebuildAPIServiceCache()
}
c.queue.Add(oldCastObj.Name)
}
func (c *AvailableConditionController) deleteAPIService(obj interface{}) {
castObj, ok := obj.(*apiregistrationv1.APIService)
if !ok {
tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
if !ok {
klog.Errorf("Couldn't get object from tombstone %#v", obj)
return
}
castObj, ok = tombstone.Obj.(*apiregistrationv1.APIService)
if !ok {
klog.Errorf("Tombstone contained object that is not expected %#v", obj)
return
}
}
klog.V(4).Infof("Deleting %q", castObj.Name)
if castObj.Spec.Service != nil {
c.rebuildAPIServiceCache()
}
c.queue.Add(castObj.Name)
}
func (c *AvailableConditionController) getAPIServicesFor(obj runtime.Object) []string {
metadata, err := meta.Accessor(obj)
if err != nil {
utilruntime.HandleError(err)
return nil
}
c.cacheLock.RLock()
defer c.cacheLock.RUnlock()
return c.cache[metadata.GetNamespace()][metadata.GetName()]
}
// if the service/endpoint handler wins the race against the cache rebuilding, it may queue a no-longer-relevant apiservice
// (which will get processed an extra time - this doesn't matter),
// and miss a newly relevant apiservice (which will get queued by the apiservice handler)
func (c *AvailableConditionController) rebuildAPIServiceCache() {
apiServiceList, _ := c.apiServiceLister.List(labels.Everything())
newCache := map[string]map[string][]string{}
for _, apiService := range apiServiceList {
if apiService.Spec.Service == nil {
continue
}
if newCache[apiService.Spec.Service.Namespace] == nil {
newCache[apiService.Spec.Service.Namespace] = map[string][]string{}
}
newCache[apiService.Spec.Service.Namespace][apiService.Spec.Service.Name] = append(newCache[apiService.Spec.Service.Namespace][apiService.Spec.Service.Name], apiService.Name)
}
c.cacheLock.Lock()
defer c.cacheLock.Unlock()
c.cache = newCache
}
// TODO, think of a way to avoid checking on every service manipulation
func (c *AvailableConditionController) addService(obj interface{}) {
for _, apiService := range c.getAPIServicesFor(obj.(*v0alpha1.ExternalName)) {
c.queue.Add(apiService)
}
}
func (c *AvailableConditionController) updateService(obj, _ interface{}) {
for _, apiService := range c.getAPIServicesFor(obj.(*v0alpha1.ExternalName)) {
c.queue.Add(apiService)
}
}
func (c *AvailableConditionController) deleteService(obj interface{}) {
castObj, ok := obj.(*v0alpha1.ExternalName)
if !ok {
tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
if !ok {
klog.Errorf("Couldn't get object from tombstone %#v", obj)
return
}
castObj, ok = tombstone.Obj.(*v0alpha1.ExternalName)
if !ok {
klog.Errorf("Tombstone contained object that is not expected %#v", obj)
return
}
}
for _, apiService := range c.getAPIServicesFor(castObj) {
c.queue.Add(apiService)
}
}


@@ -0,0 +1,32 @@
package aggregator
import (
"fmt"
"net"
"net/url"
"k8s.io/kube-aggregator/pkg/apiserver"
servicelistersv0alpha1 "github.com/grafana/grafana/pkg/generated/listers/service/v0alpha1"
)
func NewExternalNameResolver(externalNames servicelistersv0alpha1.ExternalNameLister) apiserver.ServiceResolver {
return &externalNameResolver{
externalNames: externalNames,
}
}
type externalNameResolver struct {
externalNames servicelistersv0alpha1.ExternalNameLister
}
func (r *externalNameResolver) ResolveEndpoint(namespace, name string, port int32) (*url.URL, error) {
extName, err := r.externalNames.ExternalNames(namespace).Get(name)
if err != nil {
return nil, err
}
return &url.URL{
Scheme: "https",
Host: net.JoinHostPort(extName.Spec.Host, fmt.Sprintf("%d", port)),
}, nil
}
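The resolver above always produces an `https` URL by joining the ExternalName host with the requested port. A stdlib-only sketch of that URL construction (the `endpointURL` helper and host name are illustrative, not part of this package):

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// endpointURL mirrors the URL construction in ResolveEndpoint above:
// the ExternalName host is joined with the requested port and the
// scheme is always https.
func endpointURL(host string, port int32) *url.URL {
	return &url.URL{
		Scheme: "https",
		Host:   net.JoinHostPort(host, fmt.Sprintf("%d", port)),
	}
}

func main() {
	u := endpointURL("example.grafana.net", 3000)
	fmt.Println(u.String()) // https://example.grafana.net:3000
}
```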


@@ -0,0 +1,19 @@
package authorizer
import (
"context"
"k8s.io/apiserver/pkg/authorization/authorizer"
)
var _ authorizer.Authorizer = (*impersonationAuthorizer)(nil)
// impersonationAuthorizer denies all impersonation requests.
type impersonationAuthorizer struct{}
func (auth impersonationAuthorizer) Authorize(ctx context.Context, a authorizer.Attributes) (authorized authorizer.Decision, reason string, err error) {
if a.GetVerb() == "impersonate" {
return authorizer.DecisionDeny, "user impersonation is not supported", nil
}
return authorizer.DecisionNoOpinion, "", nil
}


@@ -0,0 +1,46 @@
package authorizer
import (
"context"
"testing"
"github.com/stretchr/testify/require"
"k8s.io/apiserver/pkg/authorization/authorizer"
)
func TestImpersonationAuthorizer_Authorize(t *testing.T) {
auth := impersonationAuthorizer{}
t.Run("impersonate verb", func(t *testing.T) {
attrs := &fakeAttributes{
verb: "impersonate",
}
authorized, reason, err := auth.Authorize(context.Background(), attrs)
require.Equal(t, authorizer.DecisionDeny, authorized)
require.Equal(t, "user impersonation is not supported", reason)
require.NoError(t, err)
})
t.Run("other verb", func(t *testing.T) {
attrs := &fakeAttributes{
verb: "get",
}
authorized, reason, err := auth.Authorize(context.Background(), attrs)
require.Equal(t, authorizer.DecisionNoOpinion, authorized)
require.Equal(t, "", reason)
require.NoError(t, err)
})
}
type fakeAttributes struct {
authorizer.Attributes
verb string
}
func (a fakeAttributes) GetVerb() string {
return a.verb
}


@@ -0,0 +1,68 @@
package authorizer
import (
"context"
"fmt"
"k8s.io/apiserver/pkg/authorization/authorizer"
"github.com/grafana/grafana/pkg/infra/appcontext"
"github.com/grafana/grafana/pkg/infra/log"
grafanarequest "github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"
"github.com/grafana/grafana/pkg/services/org"
)
var _ authorizer.Authorizer = &orgIDAuthorizer{}
type orgIDAuthorizer struct {
log log.Logger
org org.Service
}
func newOrgIDAuthorizer(orgService org.Service) *orgIDAuthorizer {
return &orgIDAuthorizer{
log: log.New("grafana-apiserver.authorizer.orgid"),
org: orgService,
}
}
func (auth orgIDAuthorizer) Authorize(ctx context.Context, a authorizer.Attributes) (authorized authorizer.Decision, reason string, err error) {
signedInUser, err := appcontext.User(ctx)
if err != nil {
return authorizer.DecisionDeny, fmt.Sprintf("error getting signed in user: %v", err), nil
}
info, err := grafanarequest.ParseNamespace(a.GetNamespace())
if err != nil {
return authorizer.DecisionDeny, fmt.Sprintf("error reading namespace: %v", err), nil
}
// No opinion when the namespace is arbitrary
if info.OrgID == -1 {
return authorizer.DecisionNoOpinion, "", nil
}
if info.StackID != "" {
return authorizer.DecisionDeny, "using a stack namespace requires deployment with a fixed stack id", nil
}
// Quick check that the same org is used
if signedInUser.OrgID == info.OrgID {
return authorizer.DecisionNoOpinion, "", nil
}
// Check if the user has access to the specified org
query := org.GetUserOrgListQuery{UserID: signedInUser.UserID}
result, err := auth.org.GetUserOrgList(ctx, &query)
if err != nil {
return authorizer.DecisionDeny, "error getting user org list", err
}
for _, org := range result {
if org.OrgID == info.OrgID {
return authorizer.DecisionNoOpinion, "", nil
}
}
return authorizer.DecisionDeny, fmt.Sprintf("user %d is not a member of org %d", signedInUser.UserID, info.OrgID), nil
}


@@ -0,0 +1,55 @@
package authorizer
import (
"context"
"fmt"
"k8s.io/apiserver/pkg/authorization/authorizer"
"github.com/grafana/grafana/pkg/infra/appcontext"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/services/org"
)
var _ authorizer.Authorizer = &orgRoleAuthorizer{}
type orgRoleAuthorizer struct {
log log.Logger
}
func newOrgRoleAuthorizer(orgService org.Service) *orgRoleAuthorizer {
return &orgRoleAuthorizer{log: log.New("grafana-apiserver.authorizer.orgrole")}
}
func (auth orgRoleAuthorizer) Authorize(ctx context.Context, a authorizer.Attributes) (authorized authorizer.Decision, reason string, err error) {
signedInUser, err := appcontext.User(ctx)
if err != nil {
return authorizer.DecisionDeny, fmt.Sprintf("error getting signed in user: %v", err), nil
}
switch signedInUser.OrgRole {
case org.RoleAdmin:
return authorizer.DecisionAllow, "", nil
case org.RoleEditor:
switch a.GetVerb() {
case "get", "list", "watch", "create", "update", "patch", "delete", "put", "post":
return authorizer.DecisionAllow, "", nil
default:
return authorizer.DecisionDeny, errorMessageForGrafanaOrgRole(string(signedInUser.OrgRole), a), nil
}
case org.RoleViewer:
switch a.GetVerb() {
case "get", "list", "watch":
return authorizer.DecisionAllow, "", nil
default:
return authorizer.DecisionDeny, errorMessageForGrafanaOrgRole(string(signedInUser.OrgRole), a), nil
}
case org.RoleNone:
return authorizer.DecisionDeny, errorMessageForGrafanaOrgRole(string(signedInUser.OrgRole), a), nil
}
return authorizer.DecisionDeny, "", nil
}
func errorMessageForGrafanaOrgRole(grafanaOrgRole string, a authorizer.Attributes) string {
return fmt.Sprintf("Grafana org role (%s) didn't allow %s access on requested resource=%s, path=%s", grafanaOrgRole, a.GetVerb(), a.GetResource(), a.GetPath())
}


@@ -0,0 +1,65 @@
package authorizer
import (
"context"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apiserver/pkg/authorization/authorizer"
"k8s.io/apiserver/pkg/authorization/union"
orgsvc "github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/setting"
)
var _ authorizer.Authorizer = (*GrafanaAuthorizer)(nil)
type GrafanaAuthorizer struct {
apis map[string]authorizer.Authorizer
auth authorizer.Authorizer
}
func NewGrafanaAuthorizer(cfg *setting.Cfg, orgService orgsvc.Service) *GrafanaAuthorizer {
authorizers := []authorizer.Authorizer{
&impersonationAuthorizer{},
}
// In Hosted grafana, the StackID replaces the orgID as a valid namespace
if cfg.StackID != "" {
authorizers = append(authorizers, newStackIDAuthorizer(cfg))
} else {
authorizers = append(authorizers, newOrgIDAuthorizer(orgService))
}
// Individual services may have explicit implementations
apis := make(map[string]authorizer.Authorizer)
authorizers = append(authorizers, &authorizerForAPI{apis})
// org role is last -- and will return allow for verbs that match expectations
// The apiVersion flavors will run first and can return early when FGAC has appropriate rules
authorizers = append(authorizers, newOrgRoleAuthorizer(orgService))
return &GrafanaAuthorizer{
apis: apis,
auth: union.New(authorizers...),
}
}
func (a *GrafanaAuthorizer) Register(gv schema.GroupVersion, fn authorizer.Authorizer) {
a.apis[gv.String()] = fn
}
// Authorize implements authorizer.Authorizer.
func (a *GrafanaAuthorizer) Authorize(ctx context.Context, attr authorizer.Attributes) (authorized authorizer.Decision, reason string, err error) {
return a.auth.Authorize(ctx, attr)
}
type authorizerForAPI struct {
apis map[string]authorizer.Authorizer
}
func (a *authorizerForAPI) Authorize(ctx context.Context, attr authorizer.Attributes) (authorized authorizer.Decision, reason string, err error) {
auth, ok := a.apis[attr.GetAPIGroup()+"/"+attr.GetAPIVersion()]
if ok {
return auth.Authorize(ctx, attr)
}
return authorizer.DecisionNoOpinion, "", nil
}


@@ -0,0 +1,56 @@
package authorizer
import (
"context"
"fmt"
"k8s.io/apiserver/pkg/authorization/authorizer"
"github.com/grafana/grafana/pkg/infra/appcontext"
"github.com/grafana/grafana/pkg/infra/log"
grafanarequest "github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"
"github.com/grafana/grafana/pkg/setting"
)
var _ authorizer.Authorizer = &stackIDAuthorizer{}
type stackIDAuthorizer struct {
log log.Logger
stackID string
}
func newStackIDAuthorizer(cfg *setting.Cfg) *stackIDAuthorizer {
return &stackIDAuthorizer{
log: log.New("grafana-apiserver.authorizer.stackid"),
stackID: cfg.StackID, // this lets a single tenant grafana validate stack id (rather than orgs)
}
}
func (auth stackIDAuthorizer) Authorize(ctx context.Context, a authorizer.Attributes) (authorized authorizer.Decision, reason string, err error) {
signedInUser, err := appcontext.User(ctx)
if err != nil {
return authorizer.DecisionDeny, fmt.Sprintf("error getting signed in user: %v", err), nil
}
info, err := grafanarequest.ParseNamespace(a.GetNamespace())
if err != nil {
return authorizer.DecisionDeny, fmt.Sprintf("error reading namespace: %v", err), nil
}
// No opinion when the namespace is arbitrary
if info.OrgID == -1 {
return authorizer.DecisionNoOpinion, "", nil
}
if info.StackID != auth.stackID {
return authorizer.DecisionDeny, "wrong stack id is selected", nil
}
if info.OrgID != 1 {
return authorizer.DecisionDeny, "cloud instance requires org 1", nil
}
if signedInUser.OrgID != 1 {
return authorizer.DecisionDeny, "user must be in org 1", nil
}
return authorizer.DecisionNoOpinion, "", nil
}


@@ -0,0 +1,64 @@
package builder
import (
"net/http"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/runtime/serializer"
"k8s.io/apiserver/pkg/authorization/authorizer"
"k8s.io/apiserver/pkg/registry/generic"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/kube-openapi/pkg/common"
"k8s.io/kube-openapi/pkg/spec3"
)
// TODO: this (or something like it) belongs in grafana-app-sdk,
// but lets keep it here while we iterate on a few simple examples
type APIGroupBuilder interface {
// Get the main group name
GetGroupVersion() schema.GroupVersion
// Add the kinds to the server scheme
InstallSchema(scheme *runtime.Scheme) error
// Build the group+version behavior
GetAPIGroupInfo(
scheme *runtime.Scheme,
codecs serializer.CodecFactory,
optsGetter generic.RESTOptionsGetter,
dualWrite bool,
) (*genericapiserver.APIGroupInfo, error)
// Get OpenAPI definitions
GetOpenAPIDefinitions() common.GetOpenAPIDefinitions
// Get the API routes for each version
GetAPIRoutes() *APIRoutes
// Optionally add an authorization hook
// Standard namespace checking will happen before this is called, specifically
	// the namespace must match an org or stack that the user belongs to
GetAuthorizer() authorizer.Authorizer
}
// This is used to implement dynamic sub-resources like pods/x/logs
type APIRouteHandler struct {
Path string // added to the appropriate level
Spec *spec3.PathProps // Exposed in the open api service discovery
Handler http.HandlerFunc // when Level = resource, the resource will be available in context
}
// APIRoutes define explicit HTTP handlers in an apiserver
// TBD: is this actually necessary -- there may be more k8s native options for this
type APIRoutes struct {
// Root handlers are registered directly after the apiVersion identifier
Root []APIRouteHandler
// Namespace handlers are mounted under the namespace
Namespace []APIRouteHandler
}
type APIRegistrar interface {
RegisterAPI(builder APIGroupBuilder)
}


@@ -0,0 +1,145 @@
package builder
import (
"fmt"
"net/http"
goruntime "runtime"
"runtime/debug"
"strconv"
"strings"
"time"
"golang.org/x/mod/semver"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/serializer"
"k8s.io/apimachinery/pkg/version"
openapinamer "k8s.io/apiserver/pkg/endpoints/openapi"
"k8s.io/apiserver/pkg/registry/generic"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/apiserver/pkg/util/openapi"
k8sscheme "k8s.io/client-go/kubernetes/scheme"
"k8s.io/kube-openapi/pkg/common"
"github.com/grafana/grafana/pkg/setting"
)
func SetupConfig(
scheme *runtime.Scheme,
serverConfig *genericapiserver.RecommendedConfig,
builders []APIGroupBuilder,
) error {
defsGetter := GetOpenAPIDefinitions(builders)
serverConfig.OpenAPIConfig = genericapiserver.DefaultOpenAPIConfig(
openapi.GetOpenAPIDefinitionsWithoutDisabledFeatures(defsGetter),
openapinamer.NewDefinitionNamer(scheme, k8sscheme.Scheme))
serverConfig.OpenAPIV3Config = genericapiserver.DefaultOpenAPIV3Config(
openapi.GetOpenAPIDefinitionsWithoutDisabledFeatures(defsGetter),
openapinamer.NewDefinitionNamer(scheme, k8sscheme.Scheme))
// Add the custom routes to service discovery
serverConfig.OpenAPIV3Config.PostProcessSpec = getOpenAPIPostProcessor(builders)
serverConfig.OpenAPIV3Config.GetOperationIDAndTagsFromRoute = func(r common.Route) (string, []string, error) {
tags := []string{}
prop, ok := r.Metadata()["x-kubernetes-group-version-kind"]
if ok {
gvk, ok := prop.(metav1.GroupVersionKind)
if ok && gvk.Kind != "" {
tags = append(tags, gvk.Kind)
}
}
return r.OperationName(), tags, nil
}
// Set the swagger build versions
serverConfig.OpenAPIConfig.Info.Version = setting.BuildVersion
serverConfig.OpenAPIV3Config.Info.Version = setting.BuildVersion
serverConfig.SkipOpenAPIInstallation = false
serverConfig.BuildHandlerChainFunc = func(delegateHandler http.Handler, c *genericapiserver.Config) http.Handler {
// Call DefaultBuildHandlerChain on the main entrypoint http.Handler
// See https://github.com/kubernetes/apiserver/blob/v0.28.0/pkg/server/config.go#L906
// DefaultBuildHandlerChain provides many things, notably CORS, HSTS, cache-control, authz and latency tracking
requestHandler, err := getAPIHandler(
delegateHandler,
c.LoopbackClientConfig,
builders)
if err != nil {
panic(fmt.Sprintf("could not build handler chain func: %s", err.Error()))
}
return genericapiserver.DefaultBuildHandlerChain(requestHandler, c)
}
k8sVersion, err := getK8sApiserverVersion()
if err != nil {
return err
}
before, after, _ := strings.Cut(setting.BuildVersion, ".")
serverConfig.Version = &version.Info{
Major: before,
Minor: after,
GoVersion: goruntime.Version(),
Platform: fmt.Sprintf("%s/%s", goruntime.GOOS, goruntime.GOARCH),
Compiler: goruntime.Compiler,
GitTreeState: setting.BuildBranch,
GitCommit: setting.BuildCommit,
BuildDate: time.Unix(setting.BuildStamp, 0).UTC().Format(time.DateTime),
GitVersion: k8sVersion,
}
return nil
}
func InstallAPIs(
scheme *runtime.Scheme,
codecs serializer.CodecFactory,
server *genericapiserver.GenericAPIServer,
optsGetter generic.RESTOptionsGetter,
builders []APIGroupBuilder,
dualWrite bool,
) error {
for _, b := range builders {
g, err := b.GetAPIGroupInfo(scheme, codecs, optsGetter, dualWrite)
if err != nil {
return err
}
if g == nil || len(g.PrioritizedVersions) < 1 {
continue
}
err = server.InstallAPIGroup(g)
if err != nil {
return err
}
}
return nil
}
// find the k8s version according to build info
func getK8sApiserverVersion() (string, error) {
bi, ok := debug.ReadBuildInfo()
if !ok {
return "", fmt.Errorf("debug.ReadBuildInfo() failed")
}
if len(bi.Deps) == 0 {
return "v?.?", nil // this is normal while debugging
}
for _, dep := range bi.Deps {
if dep.Path == "k8s.io/apiserver" {
if !semver.IsValid(dep.Version) {
return "", fmt.Errorf("invalid semantic version for k8s.io/apiserver")
}
// v0 => v1
majorVersion := strings.TrimPrefix(semver.Major(dep.Version), "v")
majorInt, err := strconv.Atoi(majorVersion)
if err != nil {
return "", fmt.Errorf("could not convert majorVersion to int. majorVersion: %s", majorVersion)
}
newMajor := fmt.Sprintf("v%d", majorInt+1)
return strings.Replace(dep.Version, semver.Major(dep.Version), newMajor, 1), nil
}
}
return "", fmt.Errorf("could not find k8s.io/apiserver in build info")
}
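`getK8sApiserverVersion` rewrites the module's `v0` major to `v1` so the reported `GitVersion` looks like a real Kubernetes release (e.g. `v0.28.3` becomes `v1.28.3`). A stdlib-only sketch of that rewrite, assuming a well-formed `vMAJOR.MINOR.PATCH` input (the real code validates with `golang.org/x/mod/semver` first; `bumpMajor` is an illustrative name):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// bumpMajor replicates the v0 => v1 rewrite in getK8sApiserverVersion
// using only the standard library.
func bumpMajor(version string) (string, error) {
	major, rest, ok := strings.Cut(strings.TrimPrefix(version, "v"), ".")
	if !ok {
		return "", fmt.Errorf("unexpected version format: %s", version)
	}
	n, err := strconv.Atoi(major)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("v%d.%s", n+1, rest), nil
}

func main() {
	v, _ := bumpMajor("v0.28.3")
	fmt.Println(v) // v1.28.3
}
```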

File diff suppressed because it is too large


@@ -0,0 +1,168 @@
package builder
import (
"fmt"
"net/http"
"github.com/gorilla/mux"
restclient "k8s.io/client-go/rest"
"k8s.io/kube-openapi/pkg/spec3"
"k8s.io/kube-openapi/pkg/validation/spec"
"github.com/grafana/grafana/pkg/setting"
)
type requestHandler struct {
router *mux.Router
}
func getAPIHandler(delegateHandler http.Handler, restConfig *restclient.Config, builders []APIGroupBuilder) (http.Handler, error) {
useful := false // only true if any routes exist anywhere
router := mux.NewRouter()
for _, builder := range builders {
routes := builder.GetAPIRoutes()
if routes == nil {
continue
}
gv := builder.GetGroupVersion()
prefix := "/apis/" + gv.String()
// Root handlers
var sub *mux.Router
for _, route := range routes.Root {
if sub == nil {
sub = router.PathPrefix(prefix).Subrouter()
sub.MethodNotAllowedHandler = &methodNotAllowedHandler{}
}
useful = true
methods, err := methodsFromSpec(route.Path, route.Spec)
if err != nil {
return nil, err
}
sub.HandleFunc("/"+route.Path, route.Handler).
Methods(methods...)
}
// Namespace handlers
sub = nil
prefix += "/namespaces/{namespace}"
for _, route := range routes.Namespace {
if sub == nil {
sub = router.PathPrefix(prefix).Subrouter()
sub.MethodNotAllowedHandler = &methodNotAllowedHandler{}
}
useful = true
methods, err := methodsFromSpec(route.Path, route.Spec)
if err != nil {
return nil, err
}
sub.HandleFunc("/"+route.Path, route.Handler).
Methods(methods...)
}
}
if !useful {
return delegateHandler, nil
}
// Per Gorilla Mux issue here: https://github.com/gorilla/mux/issues/616#issuecomment-798807509
// default handler must come last
router.PathPrefix("/").Handler(delegateHandler)
return &requestHandler{
router: router,
}, nil
}
func (h *requestHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
h.router.ServeHTTP(w, req)
}
func methodsFromSpec(slug string, props *spec3.PathProps) ([]string, error) {
if props == nil {
return []string{"GET", "POST", "PUT", "PATCH", "DELETE"}, nil
}
methods := make([]string, 0)
if props.Get != nil {
methods = append(methods, "GET")
}
if props.Post != nil {
methods = append(methods, "POST")
}
if props.Put != nil {
methods = append(methods, "PUT")
}
if props.Patch != nil {
methods = append(methods, "PATCH")
}
if props.Delete != nil {
methods = append(methods, "DELETE")
}
if len(methods) == 0 {
return nil, fmt.Errorf("invalid OpenAPI Spec for slug=%s without any methods in PathProps", slug)
}
return methods, nil
}
type methodNotAllowedHandler struct{}
func (h *methodNotAllowedHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
	w.WriteHeader(http.StatusMethodNotAllowed)
}
// Modify the OpenAPI spec to include the additional routes.
// Currently this requires: https://github.com/kubernetes/kube-openapi/pull/420
// In future k8s release, the hook will use Config3 rather than the same hook for both v2 and v3
func getOpenAPIPostProcessor(builders []APIGroupBuilder) func(*spec3.OpenAPI) (*spec3.OpenAPI, error) {
return func(s *spec3.OpenAPI) (*spec3.OpenAPI, error) {
if s.Paths == nil {
return s, nil
}
for _, b := range builders {
routes := b.GetAPIRoutes()
gv := b.GetGroupVersion()
prefix := "/apis/" + gv.String() + "/"
if s.Paths.Paths[prefix] != nil {
copy := spec3.OpenAPI{
Version: s.Version,
Info: &spec.Info{
InfoProps: spec.InfoProps{
Title: gv.String(),
Version: setting.BuildVersion,
},
},
Components: s.Components,
ExternalDocs: s.ExternalDocs,
Servers: s.Servers,
Paths: s.Paths,
}
if routes == nil {
routes = &APIRoutes{}
}
for _, route := range routes.Root {
copy.Paths.Paths[prefix+route.Path] = &spec3.Path{
PathProps: *route.Spec,
}
}
for _, route := range routes.Namespace {
copy.Paths.Paths[prefix+"namespaces/{namespace}/"+route.Path] = &spec3.Path{
PathProps: *route.Spec,
}
}
return &copy, nil
}
}
return s, nil
}
}


@@ -0,0 +1,48 @@
package apiserver
import (
"fmt"
"net"
"path/filepath"
"strconv"
"github.com/grafana/grafana/pkg/services/apiserver/options"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/setting"
)
func applyGrafanaConfig(cfg *setting.Cfg, features featuremgmt.FeatureToggles, o *options.Options) {
defaultLogLevel := 0
ip := net.ParseIP(cfg.HTTPAddr)
apiURL := cfg.AppURL
port, err := strconv.Atoi(cfg.HTTPPort)
if err != nil {
port = 3000
}
if cfg.Env == setting.Dev {
defaultLogLevel = 10
port = 6443
ip = net.ParseIP("127.0.0.1")
apiURL = fmt.Sprintf("https://%s:%d", ip, port)
}
host := fmt.Sprintf("%s:%d", ip, port)
o.RecommendedOptions.Etcd.StorageConfig.Transport.ServerList = cfg.SectionWithEnvOverrides("grafana-apiserver").Key("etcd_servers").Strings(",")
o.RecommendedOptions.SecureServing.BindAddress = ip
o.RecommendedOptions.SecureServing.BindPort = port
o.RecommendedOptions.Authentication.RemoteKubeConfigFileOptional = true
o.RecommendedOptions.Authorization.RemoteKubeConfigFileOptional = true
o.RecommendedOptions.Admission = nil
o.RecommendedOptions.CoreAPI = nil
o.StorageOptions.StorageType = options.StorageType(cfg.SectionWithEnvOverrides("grafana-apiserver").Key("storage_type").MustString(string(options.StorageTypeLegacy)))
o.StorageOptions.DataPath = filepath.Join(cfg.DataPath, "grafana-apiserver")
o.ExtraOptions.DevMode = features.IsEnabledGlobally(featuremgmt.FlagGrafanaAPIServerEnsureKubectlAccess)
o.ExtraOptions.ExternalAddress = host
o.ExtraOptions.APIURL = apiURL
o.ExtraOptions.Verbosity = defaultLogLevel
}


@@ -0,0 +1,91 @@
package request
import (
"context"
"fmt"
"strconv"
"strings"
"k8s.io/apiserver/pkg/endpoints/request"
"github.com/grafana/grafana/pkg/infra/appcontext"
"github.com/grafana/grafana/pkg/setting"
)
type NamespaceInfo struct {
// OrgID defined in namespace (1 when using stack ids)
OrgID int64
// The cloud stack ID (must match the value in cfg.Settings)
StackID string
// The original namespace string regardless the input
Value string
}
// NamespaceMapper converts an orgID into a namespace
type NamespaceMapper = func(orgId int64) string
// GetNamespaceMapper returns a function that will convert orgIds into a consistent namespace
func GetNamespaceMapper(cfg *setting.Cfg) NamespaceMapper {
if cfg != nil && cfg.StackID != "" {
return func(orgId int64) string { return "stack-" + cfg.StackID }
}
return func(orgId int64) string {
if orgId == 1 {
return "default"
}
return fmt.Sprintf("org-%d", orgId)
}
}
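The mapping above is deterministic: a configured stack ID wins over any org, org 1 maps to `default`, and every other org maps to `org-<id>`. A stdlib-only sketch (the `namespaceFor` helper is illustrative, not part of this package):

```go
package main

import "fmt"

// namespaceFor mirrors GetNamespaceMapper: a non-empty stack ID wins,
// org 1 becomes "default", and every other org becomes "org-<id>".
func namespaceFor(stackID string, orgID int64) string {
	if stackID != "" {
		return "stack-" + stackID
	}
	if orgID == 1 {
		return "default"
	}
	return fmt.Sprintf("org-%d", orgID)
}

func main() {
	fmt.Println(namespaceFor("", 1))     // default
	fmt.Println(namespaceFor("", 42))    // org-42
	fmt.Println(namespaceFor("abc", 42)) // stack-abc
}
```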
func NamespaceInfoFrom(ctx context.Context, requireOrgID bool) (NamespaceInfo, error) {
info, err := ParseNamespace(request.NamespaceValue(ctx))
if err == nil && requireOrgID && info.OrgID < 1 {
return info, fmt.Errorf("expected valid orgId in namespace")
}
return info, err
}
func ParseNamespace(ns string) (NamespaceInfo, error) {
info := NamespaceInfo{Value: ns, OrgID: -1}
if ns == "default" {
info.OrgID = 1
return info, nil
}
if strings.HasPrefix(ns, "org-") {
id, err := strconv.Atoi(ns[4:])
if id < 1 {
return info, fmt.Errorf("invalid org id")
}
if id == 1 {
return info, fmt.Errorf("use default rather than org-1")
}
info.OrgID = int64(id)
return info, err
}
if strings.HasPrefix(ns, "stack-") {
info.StackID = ns[6:]
if len(info.StackID) < 2 {
return info, fmt.Errorf("invalid stack id")
}
info.OrgID = 1
return info, nil
}
return info, nil
}
func OrgIDForList(ctx context.Context) (int64, error) {
ns := request.NamespaceValue(ctx)
if ns == "" {
user, err := appcontext.User(ctx)
if user != nil {
return user.OrgID, err
}
return -1, err
}
info, err := ParseNamespace(ns)
return info.OrgID, err
}


@@ -0,0 +1,164 @@
package request_test
import (
"testing"
"github.com/stretchr/testify/require"
"github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"
"github.com/grafana/grafana/pkg/setting"
)
func TestParseNamespace(t *testing.T) {
tests := []struct {
name string
namespace string
expected request.NamespaceInfo
expectErr bool
}{
{
name: "empty namespace",
expected: request.NamespaceInfo{
OrgID: -1,
},
},
{
name: "incorrect number of parts",
namespace: "org-123-a",
expectErr: true,
expected: request.NamespaceInfo{
OrgID: -1,
},
},
{
name: "org id not a number",
namespace: "org-invalid",
expectErr: true,
expected: request.NamespaceInfo{
OrgID: -1,
},
},
{
name: "valid org id",
namespace: "org-123",
expected: request.NamespaceInfo{
OrgID: 123,
},
},
{
name: "org should not be 1 in the namespace",
namespace: "org-1",
expectErr: true,
expected: request.NamespaceInfo{
OrgID: -1,
},
},
{
name: "can not be negative",
namespace: "org--5",
expectErr: true,
expected: request.NamespaceInfo{
OrgID: -1,
},
},
{
name: "can not be zero",
namespace: "org-0",
expectErr: true,
expected: request.NamespaceInfo{
OrgID: -1,
},
},
{
name: "default is org 1",
namespace: "default",
expected: request.NamespaceInfo{
OrgID: 1,
},
},
{
name: "valid stack",
namespace: "stack-abcdef",
expected: request.NamespaceInfo{
OrgID: 1,
StackID: "abcdef",
},
},
{
name: "invalid stack id",
namespace: "stack-",
expectErr: true,
expected: request.NamespaceInfo{
OrgID: -1,
},
},
{
name: "invalid stack id (too short)",
namespace: "stack-1",
expectErr: true,
expected: request.NamespaceInfo{
OrgID: -1,
StackID: "1",
},
},
{
name: "other namespace",
namespace: "anything",
expected: request.NamespaceInfo{
OrgID: -1,
Value: "anything",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
info, err := request.ParseNamespace(tt.namespace)
if tt.expectErr != (err != nil) {
t.Errorf("ParseNamespace() returned %+v, expected an error", info)
}
if info.OrgID != tt.expected.OrgID {
t.Errorf("ParseNamespace() [OrgID] returned %d, expected %d", info.OrgID, tt.expected.OrgID)
}
if info.StackID != tt.expected.StackID {
t.Errorf("ParseNamespace() [StackID] returned %s, expected %s", info.StackID, tt.expected.StackID)
}
if info.Value != tt.namespace {
t.Errorf("ParseNamespace() [Value] returned %s, expected %s", info.Value, tt.namespace)
}
})
}
}
func TestNamespaceMapper(t *testing.T) {
tests := []struct {
name string
cfg string
orgId int64
expected string
}{
{
name: "default namespace",
orgId: 1,
expected: "default",
},
{
name: "with org",
orgId: 123,
expected: "org-123",
},
{
name: "with stackId",
cfg: "abc",
orgId: 123, // ignored
expected: "stack-abc",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mapper := request.GetNamespaceMapper(&setting.Cfg{StackID: tt.cfg})
require.Equal(t, tt.expected, mapper(tt.orgId))
})
}
}


@@ -0,0 +1,112 @@
package options
import (
"github.com/spf13/pflag"
v1 "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
openapinamer "k8s.io/apiserver/pkg/endpoints/openapi"
genericfeatures "k8s.io/apiserver/pkg/features"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/apiserver/pkg/server/options"
"k8s.io/apiserver/pkg/server/resourceconfig"
utilfeature "k8s.io/apiserver/pkg/util/feature"
apiregistrationv1beta1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1beta1"
aggregatorapiserver "k8s.io/kube-aggregator/pkg/apiserver"
aggregatorscheme "k8s.io/kube-aggregator/pkg/apiserver/scheme"
aggregatoropenapi "k8s.io/kube-aggregator/pkg/generated/openapi"
"k8s.io/kube-openapi/pkg/common"
servicev0alpha1 "github.com/grafana/grafana/pkg/apis/service/v0alpha1"
filestorage "github.com/grafana/grafana/pkg/services/apiserver/storage/file"
)
// AggregatorServerOptions contains the state for the aggregator apiserver
type AggregatorServerOptions struct {
AlternateDNS []string
ProxyClientCertFile string
ProxyClientKeyFile string
}
func NewAggregatorServerOptions() *AggregatorServerOptions {
return &AggregatorServerOptions{}
}
func (o *AggregatorServerOptions) getMergedOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
aggregatorAPIs := aggregatoropenapi.GetOpenAPIDefinitions(ref)
return aggregatorAPIs
}
func (o *AggregatorServerOptions) AddFlags(fs *pflag.FlagSet) {
if o == nil {
return
}
fs.StringVar(&o.ProxyClientCertFile, "proxy-client-cert-file", o.ProxyClientCertFile,
"path to proxy client cert file")
fs.StringVar(&o.ProxyClientKeyFile, "proxy-client-key-file", o.ProxyClientKeyFile,
		"path to proxy client key file")
}
func (o *AggregatorServerOptions) Validate() []error {
if o == nil {
return nil
}
// TODO: do we need to validate anything here?
return nil
}
func (o *AggregatorServerOptions) ApplyTo(aggregatorConfig *aggregatorapiserver.Config, etcdOpts *options.EtcdOptions, dataPath string) error {
genericConfig := aggregatorConfig.GenericConfig
genericConfig.PostStartHooks = map[string]genericapiserver.PostStartHookConfigEntry{}
genericConfig.RESTOptionsGetter = nil
if utilfeature.DefaultFeatureGate.Enabled(genericfeatures.StorageVersionAPI) &&
utilfeature.DefaultFeatureGate.Enabled(genericfeatures.APIServerIdentity) {
// Add StorageVersionPrecondition handler to aggregator-apiserver.
// The handler will block write requests to built-in resources until the
// target resources' storage versions are up-to-date.
genericConfig.BuildHandlerChainFunc = genericapiserver.BuildHandlerChainWithStorageVersionPrecondition
}
// copy the etcd options so we don't mutate originals.
// we assume that the etcd options have been completed already. avoid messing with anything outside
// of changes to StorageConfig as that may lead to unexpected behavior when the options are applied.
etcdOptions := *etcdOpts
etcdOptions.StorageConfig.Codec = aggregatorscheme.Codecs.LegacyCodec(v1.SchemeGroupVersion,
apiregistrationv1beta1.SchemeGroupVersion,
servicev0alpha1.SchemeGroupVersion)
etcdOptions.StorageConfig.EncodeVersioner = runtime.NewMultiGroupVersioner(v1.SchemeGroupVersion,
schema.GroupKind{Group: apiregistrationv1beta1.GroupName},
schema.GroupKind{Group: servicev0alpha1.GROUP})
etcdOptions.SkipHealthEndpoints = true // avoid double wiring of health checks
if err := etcdOptions.ApplyTo(&genericConfig.Config); err != nil {
return err
}
// override the RESTOptionsGetter to use the file storage options getter
aggregatorConfig.GenericConfig.RESTOptionsGetter = filestorage.NewRESTOptionsGetter(dataPath, etcdOptions.StorageConfig)
// prevent generic API server from installing the OpenAPI handler. Aggregator server has its own customized OpenAPI handler.
genericConfig.SkipOpenAPIInstallation = true
mergedResourceConfig, err := resourceconfig.MergeAPIResourceConfigs(aggregatorapiserver.DefaultAPIResourceConfigSource(), nil, aggregatorscheme.Scheme)
if err != nil {
return err
}
genericConfig.MergedResourceConfig = mergedResourceConfig
namer := openapinamer.NewDefinitionNamer(aggregatorscheme.Scheme)
genericConfig.OpenAPIV3Config = genericapiserver.DefaultOpenAPIV3Config(o.getMergedOpenAPIDefinitions, namer)
genericConfig.OpenAPIV3Config.Info.Title = "Kubernetes"
genericConfig.OpenAPIConfig = genericapiserver.DefaultOpenAPIConfig(o.getMergedOpenAPIDefinitions, namer)
genericConfig.OpenAPIConfig.Info.Title = "Kubernetes"
genericConfig.PostStartHooks = map[string]genericapiserver.PostStartHookConfigEntry{}
// These hooks use v1 informers, which are not available in the grafana aggregator.
genericConfig.DisabledPostStartHooks = genericConfig.DisabledPostStartHooks.Insert("apiservice-status-available-controller")
genericConfig.DisabledPostStartHooks = genericConfig.DisabledPostStartHooks.Insert("start-kube-aggregator-informers")
return nil
}


@@ -0,0 +1,46 @@
package options
import (
"strconv"
"github.com/go-logr/logr"
"github.com/spf13/pflag"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/component-base/logs"
"k8s.io/klog/v2"
)
type ExtraOptions struct {
DevMode bool
ExternalAddress string
APIURL string
Verbosity int
}
func NewExtraOptions() *ExtraOptions {
return &ExtraOptions{
DevMode: false,
Verbosity: 0,
}
}
func (o *ExtraOptions) AddFlags(fs *pflag.FlagSet) {
fs.BoolVar(&o.DevMode, "grafana-apiserver-dev-mode", o.DevMode, "Enable dev mode")
fs.StringVar(&o.ExternalAddress, "grafana-apiserver-host", o.ExternalAddress, "Host")
fs.StringVar(&o.APIURL, "grafana-apiserver-api-url", o.APIURL, "API URL")
fs.IntVar(&o.Verbosity, "verbosity", o.Verbosity, "Verbosity")
}
func (o *ExtraOptions) Validate() []error {
return nil
}
func (o *ExtraOptions) ApplyTo(c *genericapiserver.RecommendedConfig) error {
logger := logr.New(newLogAdapter(o.Verbosity))
klog.SetLoggerWithOptions(logger, klog.ContextualLogger(true))
if _, err := logs.GlogSetter(strconv.Itoa(o.Verbosity)); err != nil {
logger.Error(err, "failed to set log level")
}
c.ExternalAddress = o.ExternalAddress
return nil
}


@@ -0,0 +1,54 @@
package options
import (
"strings"
"github.com/go-logr/logr"
"github.com/grafana/grafana/pkg/infra/log"
)
var _ logr.LogSink = (*logAdapter)(nil)
type logAdapter struct {
level int
log log.Logger
}
func newLogAdapter(level int) *logAdapter {
return &logAdapter{log: log.New("grafana-apiserver"), level: level}
}
func (l *logAdapter) WithName(name string) logr.LogSink {
// logr sinks should not mutate the shared receiver; return a copy
v := *l
v.log = l.log.New("name", name)
return &v
}
func (l *logAdapter) WithValues(keysAndValues ...any) logr.LogSink {
v := *l
v.log = l.log.New(keysAndValues...)
return &v
}
func (l *logAdapter) Init(_ logr.RuntimeInfo) {
// we aren't using the logr library for logging, so this is a no-op
}
func (l *logAdapter) Enabled(level int) bool {
return level <= l.level
}
func (l *logAdapter) Info(level int, msg string, keysAndValues ...any) {
msg = strings.TrimSpace(msg)
// kubernetes uses level 0 for critical messages, so map that to Info
if level == 0 {
l.log.Info(msg, keysAndValues...)
return
}
// every other level is mapped to Debug
l.log.Debug(msg, keysAndValues...)
}
func (l *logAdapter) Error(err error, msg string, keysAndValues ...any) {
msg = strings.TrimSpace(msg)
// include the error itself, which the logr signature carries separately
l.log.Error(msg, append(keysAndValues, "error", err)...)
}


@@ -0,0 +1,126 @@
package options
import (
"net"
"github.com/spf13/pflag"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apiserver/pkg/endpoints/discovery/aggregated"
genericapiserver "k8s.io/apiserver/pkg/server"
genericoptions "k8s.io/apiserver/pkg/server/options"
)
const defaultEtcdPathPrefix = "/registry/grafana.app"
type Options struct {
RecommendedOptions *genericoptions.RecommendedOptions
AggregatorOptions *AggregatorServerOptions
StorageOptions *StorageOptions
ExtraOptions *ExtraOptions
}
func NewOptions(codec runtime.Codec) *Options {
return &Options{
RecommendedOptions: genericoptions.NewRecommendedOptions(
defaultEtcdPathPrefix,
codec,
),
AggregatorOptions: NewAggregatorServerOptions(),
StorageOptions: NewStorageOptions(),
ExtraOptions: NewExtraOptions(),
}
}
func (o *Options) AddFlags(fs *pflag.FlagSet) {
o.RecommendedOptions.AddFlags(fs)
o.AggregatorOptions.AddFlags(fs)
o.StorageOptions.AddFlags(fs)
o.ExtraOptions.AddFlags(fs)
}
func (o *Options) Validate() []error {
if errs := o.ExtraOptions.Validate(); len(errs) != 0 {
return errs
}
if errs := o.StorageOptions.Validate(); len(errs) != 0 {
return errs
}
if errs := o.AggregatorOptions.Validate(); len(errs) != 0 {
return errs
}
if errs := o.RecommendedOptions.SecureServing.Validate(); len(errs) != 0 {
return errs
}
if errs := o.RecommendedOptions.Authentication.Validate(); len(errs) != 0 {
return errs
}
if o.StorageOptions.StorageType == StorageTypeEtcd {
if errs := o.RecommendedOptions.Etcd.Validate(); len(errs) != 0 {
return errs
}
}
return nil
}
func (o *Options) ApplyTo(serverConfig *genericapiserver.RecommendedConfig) error {
serverConfig.AggregatedDiscoveryGroupManager = aggregated.NewResourceManager("apis")
if err := o.ExtraOptions.ApplyTo(serverConfig); err != nil {
return err
}
if !o.ExtraOptions.DevMode {
o.RecommendedOptions.SecureServing.Listener = newFakeListener()
}
if err := o.RecommendedOptions.SecureServing.ApplyTo(&serverConfig.SecureServing, &serverConfig.LoopbackClientConfig); err != nil {
return err
}
if err := o.RecommendedOptions.Authentication.ApplyTo(&serverConfig.Authentication, serverConfig.SecureServing, serverConfig.OpenAPIConfig); err != nil {
return err
}
if !o.ExtraOptions.DevMode {
if err := serverConfig.SecureServing.Listener.Close(); err != nil {
return err
}
serverConfig.SecureServing = nil
}
return nil
}
type fakeListener struct {
server net.Conn
client net.Conn
}
func newFakeListener() *fakeListener {
server, client := net.Pipe()
return &fakeListener{
server: server,
client: client,
}
}
func (f *fakeListener) Accept() (net.Conn, error) {
return f.server, nil
}
func (f *fakeListener) Close() error {
if err := f.client.Close(); err != nil {
return err
}
return f.server.Close()
}
func (f *fakeListener) Addr() net.Addr {
return &net.TCPAddr{IP: net.IPv4(127, 0, 0, 1), Port: 3000, Zone: ""}
}


@@ -0,0 +1,51 @@
package options
import (
"fmt"
"github.com/spf13/pflag"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/apiserver/pkg/server/options"
)
type StorageType string
const (
StorageTypeFile StorageType = "file"
StorageTypeEtcd StorageType = "etcd"
StorageTypeLegacy StorageType = "legacy"
StorageTypeUnified StorageType = "unified"
StorageTypeUnifiedGrpc StorageType = "unified-grpc"
)
type StorageOptions struct {
StorageType StorageType
DataPath string
}
func NewStorageOptions() *StorageOptions {
return &StorageOptions{
StorageType: StorageTypeLegacy,
}
}
func (o *StorageOptions) AddFlags(fs *pflag.FlagSet) {
fs.StringVar((*string)(&o.StorageType), "grafana-apiserver-storage-type", string(o.StorageType), "Storage type")
fs.StringVar(&o.DataPath, "grafana-apiserver-storage-path", o.DataPath, "Storage path for file storage")
}
func (o *StorageOptions) Validate() []error {
errs := []error{}
switch o.StorageType {
case StorageTypeFile, StorageTypeEtcd, StorageTypeLegacy, StorageTypeUnified, StorageTypeUnifiedGrpc:
// no-op
default:
errs = append(errs, fmt.Errorf("--grafana-apiserver-storage-type must be one of %s, %s, %s, %s, %s", StorageTypeFile, StorageTypeEtcd, StorageTypeLegacy, StorageTypeUnified, StorageTypeUnifiedGrpc))
}
return errs
}
func (o *StorageOptions) ApplyTo(serverConfig *genericapiserver.RecommendedConfig, etcdOptions *options.EtcdOptions) error {
// TODO: move storage setup here
return nil
}


@@ -0,0 +1,79 @@
package generic
import (
"context"
"k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/validation/field"
"k8s.io/apiserver/pkg/storage"
"k8s.io/apiserver/pkg/storage/names"
)
type genericStrategy struct {
runtime.ObjectTyper
names.NameGenerator
}
// NewStrategy creates and returns a genericStrategy instance.
func NewStrategy(typer runtime.ObjectTyper) genericStrategy {
return genericStrategy{typer, names.SimpleNameGenerator}
}
// NamespaceScoped returns true because all Generic resources must be within a namespace.
func (genericStrategy) NamespaceScoped() bool {
return true
}
func (genericStrategy) PrepareForCreate(ctx context.Context, obj runtime.Object) {}
func (genericStrategy) PrepareForUpdate(ctx context.Context, obj, old runtime.Object) {}
func (genericStrategy) Validate(ctx context.Context, obj runtime.Object) field.ErrorList {
return field.ErrorList{}
}
// WarningsOnCreate returns warnings for the creation of the given object.
func (genericStrategy) WarningsOnCreate(ctx context.Context, obj runtime.Object) []string { return nil }
func (genericStrategy) AllowCreateOnUpdate() bool {
return true
}
func (genericStrategy) AllowUnconditionalUpdate() bool {
return true
}
func (genericStrategy) Canonicalize(obj runtime.Object) {}
func (genericStrategy) ValidateUpdate(ctx context.Context, obj, old runtime.Object) field.ErrorList {
return field.ErrorList{}
}
// WarningsOnUpdate returns warnings for the given update.
func (genericStrategy) WarningsOnUpdate(ctx context.Context, obj, old runtime.Object) []string {
return nil
}
// GetAttrs returns labels and fields of an object.
func GetAttrs(obj runtime.Object) (labels.Set, fields.Set, error) {
accessor, err := meta.Accessor(obj)
if err != nil {
return nil, nil, err
}
fieldsSet := fields.Set{
"metadata.name": accessor.GetName(),
}
return labels.Set(accessor.GetLabels()), fieldsSet, nil
}
// Matcher returns a generic.SelectionPredicate that matches on label and field selectors.
func Matcher(label labels.Selector, field fields.Selector) storage.SelectionPredicate {
return storage.SelectionPredicate{
Label: label,
Field: field,
GetAttrs: GetAttrs,
}
}


@@ -0,0 +1,195 @@
package rest
import (
"context"
"k8s.io/apimachinery/pkg/api/meta"
metainternalversion "k8s.io/apimachinery/pkg/apis/meta/internalversion"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apiserver/pkg/registry/rest"
"github.com/grafana/grafana/pkg/infra/log"
)
var (
_ rest.Storage = (*DualWriter)(nil)
_ rest.Scoper = (*DualWriter)(nil)
_ rest.TableConvertor = (*DualWriter)(nil)
_ rest.CreaterUpdater = (*DualWriter)(nil)
_ rest.CollectionDeleter = (*DualWriter)(nil)
_ rest.GracefulDeleter = (*DualWriter)(nil)
_ rest.SingularNameProvider = (*DualWriter)(nil)
)
// Storage is a storage implementation that satisfies the same interfaces as genericregistry.Store.
type Storage interface {
rest.Storage
rest.StandardStorage
rest.Scoper
rest.TableConvertor
rest.SingularNameProvider
rest.Getter
}
// LegacyStorage is a storage implementation that writes to the Grafana SQL database.
type LegacyStorage interface {
rest.Storage
rest.Scoper
rest.SingularNameProvider
rest.TableConvertor
rest.Getter
}
// DualWriter is a storage implementation that writes first to LegacyStorage and then to Storage.
// If writing to LegacyStorage fails, the write to Storage is skipped and the error is returned.
// Storage is used for all read operations. This is useful as a migration step from SQL based
// legacy storage to a more standard kubernetes backed storage interface.
//
// NOTE: Only values supported by legacy storage will be preserved in the CREATE/UPDATE commands.
// For example, annotations, labels, and managed fields may not be preserved. Everything in upstream
// storage can be recreated from the data in legacy storage.
//
// The LegacyStorage implementation must implement the following interfaces:
// - rest.Storage
// - rest.TableConvertor
// - rest.Scoper
// - rest.SingularNameProvider
//
// These interfaces are optional, but they all should be implemented to fully support dual writes:
// - rest.Creater
// - rest.Updater
// - rest.GracefulDeleter
// - rest.CollectionDeleter
type DualWriter struct {
Storage
legacy LegacyStorage
log log.Logger
}
// NewDualWriter returns a new DualWriter.
func NewDualWriter(legacy LegacyStorage, storage Storage) *DualWriter {
return &DualWriter{
Storage: storage,
legacy: legacy,
log: log.New("grafana-apiserver.dualwriter"),
}
}
// Create overrides the default behavior of the Storage and writes to both the LegacyStorage and Storage.
func (d *DualWriter) Create(ctx context.Context, obj runtime.Object, createValidation rest.ValidateObjectFunc, options *metav1.CreateOptions) (runtime.Object, error) {
if legacy, ok := d.legacy.(rest.Creater); ok {
created, err := legacy.Create(ctx, obj, createValidation, options)
if err != nil {
return nil, err
}
accessor, err := meta.Accessor(created)
if err != nil {
return created, err
}
accessor.SetResourceVersion("")
accessor.SetUID("")
rsp, err := d.Storage.Create(ctx, created, createValidation, options)
if err != nil {
d.log.Error("unable to create object in duplicate storage", "error", err)
}
return rsp, err
}
return d.Storage.Create(ctx, obj, createValidation, options)
}
// Update overrides the default behavior of the Storage and writes to both the LegacyStorage and Storage.
func (d *DualWriter) Update(ctx context.Context, name string, objInfo rest.UpdatedObjectInfo, createValidation rest.ValidateObjectFunc, updateValidation rest.ValidateObjectUpdateFunc, forceAllowCreate bool, options *metav1.UpdateOptions) (runtime.Object, bool, error) {
if legacy, ok := d.legacy.(rest.Updater); ok {
// Get the previous version from k8s storage (the copy that holds the authoritative RV and UID)
old, err := d.Get(ctx, name, &metav1.GetOptions{})
if err != nil {
return nil, false, err
}
accessor, err := meta.Accessor(old)
if err != nil {
return nil, false, err
}
// Hold on to the RV+UID for the dual write
theRV := accessor.GetResourceVersion()
theUID := accessor.GetUID()
// Changes applied within new storage
// will fail if RV is out of sync
updated, err := objInfo.UpdatedObject(ctx, old)
if err != nil {
return nil, false, err
}
accessor, err = meta.Accessor(updated)
if err != nil {
return nil, false, err
}
accessor.SetUID("") // clear it
accessor.SetResourceVersion("") // remove it so it is not a constraint
obj, created, err := legacy.Update(ctx, name, &updateWrapper{
upstream: objInfo,
updated: updated, // returned as the object that will be updated
}, createValidation, updateValidation, forceAllowCreate, options)
if err != nil {
return obj, created, err
}
accessor, err = meta.Accessor(obj)
if err != nil {
return nil, false, err
}
accessor.SetResourceVersion(theRV) // the original RV
accessor.SetUID(theUID)
objInfo = &updateWrapper{
upstream: objInfo,
updated: obj, // returned as the object that will be updated
}
}
return d.Storage.Update(ctx, name, objInfo, createValidation, updateValidation, forceAllowCreate, options)
}
// Delete overrides the default behavior of the Storage and deletes from both the LegacyStorage and Storage.
func (d *DualWriter) Delete(ctx context.Context, name string, deleteValidation rest.ValidateObjectFunc, options *metav1.DeleteOptions) (runtime.Object, bool, error) {
// Delete from storage *first* so the item still exists if a failure happens
obj, async, err := d.Storage.Delete(ctx, name, deleteValidation, options)
if err == nil {
if legacy, ok := d.legacy.(rest.GracefulDeleter); ok {
obj, async, err = legacy.Delete(ctx, name, deleteValidation, options)
}
}
return obj, async, err
}
// DeleteCollection overrides the default behavior of the Storage and deletes from both the LegacyStorage and Storage.
func (d *DualWriter) DeleteCollection(ctx context.Context, deleteValidation rest.ValidateObjectFunc, options *metav1.DeleteOptions, listOptions *metainternalversion.ListOptions) (runtime.Object, error) {
out, err := d.Storage.DeleteCollection(ctx, deleteValidation, options, listOptions)
if err == nil {
if legacy, ok := d.legacy.(rest.CollectionDeleter); ok {
out, err = legacy.DeleteCollection(ctx, deleteValidation, options, listOptions)
}
}
return out, err
}
type updateWrapper struct {
upstream rest.UpdatedObjectInfo
updated runtime.Object
}
// Returns preconditions built from the updated object, if applicable.
// May return nil, or a preconditions object containing nil fields,
// if no preconditions can be determined from the updated object.
func (u *updateWrapper) Preconditions() *metav1.Preconditions {
return u.upstream.Preconditions()
}
// UpdatedObject returns the updated object, given a context and old object.
// The only time an empty oldObj should be passed in is if a "create on update" is occurring (there is no oldObj).
func (u *updateWrapper) UpdatedObject(ctx context.Context, oldObj runtime.Object) (newObj runtime.Object, err error) {
return u.updated, nil
}


@@ -0,0 +1,399 @@
package apiserver
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"path"
"github.com/grafana/dskit/services"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/runtime/serializer"
"k8s.io/apiserver/pkg/endpoints/responsewriter"
genericapiserver "k8s.io/apiserver/pkg/server"
clientrest "k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"github.com/grafana/grafana/pkg/api/routing"
"github.com/grafana/grafana/pkg/infra/appcontext"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/middleware"
"github.com/grafana/grafana/pkg/modules"
"github.com/grafana/grafana/pkg/registry"
"github.com/grafana/grafana/pkg/services/apiserver/auth/authorizer"
"github.com/grafana/grafana/pkg/services/apiserver/builder"
grafanaapiserveroptions "github.com/grafana/grafana/pkg/services/apiserver/options"
entitystorage "github.com/grafana/grafana/pkg/services/apiserver/storage/entity"
filestorage "github.com/grafana/grafana/pkg/services/apiserver/storage/file"
"github.com/grafana/grafana/pkg/services/apiserver/utils"
contextmodel "github.com/grafana/grafana/pkg/services/contexthandler/model"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/services/store/entity"
"github.com/grafana/grafana/pkg/services/store/entity/db/dbimpl"
"github.com/grafana/grafana/pkg/services/store/entity/sqlstash"
"github.com/grafana/grafana/pkg/setting"
)
var (
_ Service = (*service)(nil)
_ RestConfigProvider = (*service)(nil)
_ registry.BackgroundService = (*service)(nil)
_ registry.CanBeDisabled = (*service)(nil)
Scheme = runtime.NewScheme()
Codecs = serializer.NewCodecFactory(Scheme)
unversionedVersion = schema.GroupVersion{Group: "", Version: "v1"}
unversionedTypes = []runtime.Object{
&metav1.Status{},
&metav1.WatchEvent{},
&metav1.APIVersions{},
&metav1.APIGroupList{},
&metav1.APIGroup{},
&metav1.APIResourceList{},
}
)
func init() {
// we need to add the options to empty v1
metav1.AddToGroupVersion(Scheme, schema.GroupVersion{Group: "", Version: "v1"})
Scheme.AddUnversionedTypes(unversionedVersion, unversionedTypes...)
}
type Service interface {
services.NamedService
registry.BackgroundService
registry.CanBeDisabled
}
type RestConfigProvider interface {
GetRestConfig() *clientrest.Config
}
type DirectRestConfigProvider interface {
// GetDirectRestConfig returns a k8s client configuration that will use the same
// logged-in user as the current request context. This is useful when
// creating clients that map legacy API handlers to k8s backed services
GetDirectRestConfig(c *contextmodel.ReqContext) *clientrest.Config
// This can be used to rewrite incoming requests to paths now served under /apis
DirectlyServeHTTP(w http.ResponseWriter, r *http.Request)
}
type service struct {
*services.BasicService
options *grafanaapiserveroptions.Options
restConfig *clientrest.Config
cfg *setting.Cfg
features featuremgmt.FeatureToggles
stopCh chan struct{}
stoppedCh chan error
db db.DB
rr routing.RouteRegister
handler http.Handler
builders []builder.APIGroupBuilder
tracing *tracing.TracingService
authorizer *authorizer.GrafanaAuthorizer
}
func ProvideService(
cfg *setting.Cfg,
features featuremgmt.FeatureToggles,
rr routing.RouteRegister,
orgService org.Service,
tracing *tracing.TracingService,
db db.DB,
) (*service, error) {
s := &service{
cfg: cfg,
features: features,
rr: rr,
stopCh: make(chan struct{}),
builders: []builder.APIGroupBuilder{},
authorizer: authorizer.NewGrafanaAuthorizer(cfg, orgService),
tracing: tracing,
db: db, // For Unified storage
}
// This will be used when running as a dskit service
s.BasicService = services.NewBasicService(s.start, s.running, nil).WithName(modules.GrafanaAPIServer)
// TODO: this is very hacky
// We need to register the routes in ProvideService to make sure
// the routes are registered before the Grafana HTTP server starts.
proxyHandler := func(k8sRoute routing.RouteRegister) {
handler := func(c *contextmodel.ReqContext) {
if s.handler == nil {
c.Resp.WriteHeader(404)
_, _ = c.Resp.Write([]byte("Not found"))
return
}
req := c.Req
if req.URL.Path == "" {
req.URL.Path = "/"
}
// TODO: add support for the existing MetricsEndpointBasicAuth config option
if req.URL.Path == "/apiserver-metrics" {
req.URL.Path = "/metrics"
}
resp := responsewriter.WrapForHTTP1Or2(c.Resp)
s.handler.ServeHTTP(resp, req)
}
k8sRoute.Any("/", middleware.ReqSignedIn, handler)
k8sRoute.Any("/*", middleware.ReqSignedIn, handler)
}
s.rr.Group("/apis", proxyHandler)
s.rr.Group("/livez", proxyHandler)
s.rr.Group("/readyz", proxyHandler)
s.rr.Group("/healthz", proxyHandler)
s.rr.Group("/openapi", proxyHandler)
return s, nil
}
func (s *service) GetRestConfig() *clientrest.Config {
return s.restConfig
}
func (s *service) IsDisabled() bool {
return false
}
// Run is an adapter for the BackgroundService interface.
func (s *service) Run(ctx context.Context) error {
if err := s.start(ctx); err != nil {
return err
}
return s.running(ctx)
}
func (s *service) RegisterAPI(b builder.APIGroupBuilder) {
s.builders = append(s.builders, b)
}
func (s *service) start(ctx context.Context) error {
// Get the list of groups the server will support
builders := s.builders
groupVersions := make([]schema.GroupVersion, 0, len(builders))
// Install schemas
for _, b := range builders {
groupVersions = append(groupVersions, b.GetGroupVersion())
if err := b.InstallSchema(Scheme); err != nil {
return err
}
auth := b.GetAuthorizer()
if auth != nil {
s.authorizer.Register(b.GetGroupVersion(), auth)
}
}
o := grafanaapiserveroptions.NewOptions(Codecs.LegacyCodec(groupVersions...))
applyGrafanaConfig(s.cfg, s.features, o)
if errs := o.Validate(); len(errs) != 0 {
// TODO: handle multiple errors
return errs[0]
}
serverConfig := genericapiserver.NewRecommendedConfig(Codecs)
if err := o.ApplyTo(serverConfig); err != nil {
return err
}
serverConfig.Authorization.Authorizer = s.authorizer
serverConfig.TracerProvider = s.tracing.GetTracerProvider()
// setup loopback transport
transport := &roundTripperFunc{ready: make(chan struct{})}
serverConfig.LoopbackClientConfig.Transport = transport
serverConfig.LoopbackClientConfig.TLSClientConfig = clientrest.TLSClientConfig{}
switch o.StorageOptions.StorageType {
case grafanaapiserveroptions.StorageTypeEtcd:
if err := o.RecommendedOptions.Etcd.Validate(); len(err) > 0 {
return err[0]
}
if err := o.RecommendedOptions.Etcd.ApplyTo(&serverConfig.Config); err != nil {
return err
}
case grafanaapiserveroptions.StorageTypeUnified:
if !s.features.IsEnabledGlobally(featuremgmt.FlagUnifiedStorage) {
return fmt.Errorf("unified storage requires the unifiedStorage feature flag (and app_mode = development)")
}
eDB, err := dbimpl.ProvideEntityDB(s.db, s.cfg, s.features)
if err != nil {
return err
}
store, err := sqlstash.ProvideSQLEntityServer(eDB)
if err != nil {
return err
}
serverConfig.Config.RESTOptionsGetter = entitystorage.NewRESTOptionsGetter(s.cfg, store, o.RecommendedOptions.Etcd.StorageConfig.Codec)
case grafanaapiserveroptions.StorageTypeUnifiedGrpc:
// Create a connection to the gRPC server
// TODO: support configuring the gRPC server address
conn, err := grpc.Dial("localhost:10000", grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
return err
}
// TODO: determine when to close the connection, we cannot defer it here
// defer conn.Close()
// Create a client instance
store := entity.NewEntityStoreClientWrapper(conn)
serverConfig.Config.RESTOptionsGetter = entitystorage.NewRESTOptionsGetter(s.cfg, store, o.RecommendedOptions.Etcd.StorageConfig.Codec)
case grafanaapiserveroptions.StorageTypeLegacy:
fallthrough
case grafanaapiserveroptions.StorageTypeFile:
serverConfig.RESTOptionsGetter = filestorage.NewRESTOptionsGetter(o.StorageOptions.DataPath, o.RecommendedOptions.Etcd.StorageConfig)
}
// Add OpenAPI specs for each group+version
err := builder.SetupConfig(Scheme, serverConfig, builders)
if err != nil {
return err
}
// support folder selection
err = entitystorage.RegisterFieldSelectorSupport(Scheme)
if err != nil {
return err
}
// Create the server
server, err := serverConfig.Complete().New("grafana-apiserver", genericapiserver.NewEmptyDelegate())
if err != nil {
return err
}
dualWriteEnabled := o.StorageOptions.StorageType != grafanaapiserveroptions.StorageTypeLegacy
// Install the API Group+version
err = builder.InstallAPIs(Scheme, Codecs, server, serverConfig.RESTOptionsGetter, builders, dualWriteEnabled)
if err != nil {
return err
}
// set the transport function and signal that it's ready
transport.fn = func(req *http.Request) (*http.Response, error) {
w := newWrappedResponseWriter()
resp := responsewriter.WrapForHTTP1Or2(w)
server.Handler.ServeHTTP(resp, req)
return w.Result(), nil
}
close(transport.ready)
// only write kubeconfig in dev mode
if o.ExtraOptions.DevMode {
if err := ensureKubeConfig(server.LoopbackClientConfig, o.StorageOptions.DataPath); err != nil {
return err
}
}
// Used by the proxy wrapper registered in ProvideService
s.handler = server.Handler
s.restConfig = server.LoopbackClientConfig
s.options = o
prepared := server.PrepareRun()
go func() {
s.stoppedCh <- prepared.Run(s.stopCh)
}()
return nil
}
func (s *service) GetDirectRestConfig(c *contextmodel.ReqContext) *clientrest.Config {
return &clientrest.Config{
Transport: &roundTripperFunc{
fn: func(req *http.Request) (*http.Response, error) {
ctx := appcontext.WithUser(req.Context(), c.SignedInUser)
w := httptest.NewRecorder()
s.handler.ServeHTTP(w, req.WithContext(ctx))
return w.Result(), nil
},
},
}
}
func (s *service) DirectlyServeHTTP(w http.ResponseWriter, r *http.Request) {
s.handler.ServeHTTP(w, r)
}
func (s *service) running(ctx context.Context) error {
select {
case err := <-s.stoppedCh:
if err != nil {
return err
}
case <-ctx.Done():
close(s.stopCh)
}
return nil
}
func ensureKubeConfig(restConfig *clientrest.Config, dir string) error {
return clientcmd.WriteToFile(
utils.FormatKubeConfig(restConfig),
path.Join(dir, "grafana.kubeconfig"),
)
}
type roundTripperFunc struct {
ready chan struct{}
fn func(req *http.Request) (*http.Response, error)
}
func (f *roundTripperFunc) RoundTrip(req *http.Request) (*http.Response, error) {
if f.fn == nil {
<-f.ready
}
return f.fn(req)
}
var _ http.ResponseWriter = (*wrappedResponseWriter)(nil)
var _ responsewriter.UserProvidedDecorator = (*wrappedResponseWriter)(nil)
type wrappedResponseWriter struct {
*httptest.ResponseRecorder
}
func newWrappedResponseWriter() *wrappedResponseWriter {
w := httptest.NewRecorder()
return &wrappedResponseWriter{w}
}
func (w *wrappedResponseWriter) Unwrap() http.ResponseWriter {
return w.ResponseRecorder
}
func (w *wrappedResponseWriter) CloseNotify() <-chan bool {
// TODO: this is probably not the right thing to do here
return make(<-chan bool)
}


@@ -0,0 +1,76 @@
package entity
import (
"fmt"
"strings"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/selection"
)
const folderAnnoKey = "grafana.app/folder"
type FieldRequirements struct {
// Equals folder
Folder *string
}
func ReadFieldRequirements(selector fields.Selector) (FieldRequirements, fields.Selector, error) {
requirements := FieldRequirements{}
if selector == nil {
return requirements, selector, nil
}
for _, r := range selector.Requirements() {
switch r.Field {
case folderAnnoKey:
if (r.Operator != selection.Equals) && (r.Operator != selection.DoubleEquals) {
return requirements, selector, apierrors.NewBadRequest("only equality is supported in the selectors")
}
folder := r.Value
requirements.Folder = &folder
}
}
// use Transform function to remove grafana.app/folder field selector
selector, err := selector.Transform(func(field, value string) (string, string, error) {
switch field {
case folderAnnoKey:
return "", "", nil
}
return field, value, nil
})
return requirements, selector, err
}
func RegisterFieldSelectorSupport(scheme *runtime.Scheme) error {
grafanaFieldSupport := runtime.FieldLabelConversionFunc(
func(field, value string) (string, string, error) {
if strings.HasPrefix(field, "grafana.app/") {
return field, value, nil
}
return "", "", getBadSelectorError(field)
},
)
// Register all the internal types
for gvk := range scheme.AllKnownTypes() {
if strings.HasSuffix(gvk.Group, ".grafana.app") {
err := scheme.AddFieldLabelConversionFunc(gvk, grafanaFieldSupport)
if err != nil {
return err
}
}
}
return nil
}
func getBadSelectorError(f string) error {
return apierrors.NewBadRequest(
fmt.Sprintf("%q is not a known field selector: only %q works", f, folderAnnoKey),
)
}


@@ -0,0 +1,90 @@
// SPDX-License-Identifier: AGPL-3.0-only
package entity
import (
"encoding/json"
"path"
"time"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apiserver/pkg/registry/generic"
"k8s.io/apiserver/pkg/storage"
"k8s.io/apiserver/pkg/storage/storagebackend"
"k8s.io/apiserver/pkg/storage/storagebackend/factory"
flowcontrolrequest "k8s.io/apiserver/pkg/util/flowcontrol/request"
"k8s.io/client-go/tools/cache"
entityStore "github.com/grafana/grafana/pkg/services/store/entity"
"github.com/grafana/grafana/pkg/setting"
)
var _ generic.RESTOptionsGetter = (*RESTOptionsGetter)(nil)
type RESTOptionsGetter struct {
cfg *setting.Cfg
store entityStore.EntityStoreServer
Codec runtime.Codec
}
func NewRESTOptionsGetter(cfg *setting.Cfg, store entityStore.EntityStoreServer, codec runtime.Codec) *RESTOptionsGetter {
return &RESTOptionsGetter{
cfg: cfg,
store: store,
Codec: codec,
}
}
func (f *RESTOptionsGetter) GetRESTOptions(resource schema.GroupResource) (generic.RESTOptions, error) {
// build connection string to uniquely identify the storage backend
connectionInfo, err := json.Marshal(f.cfg.SectionWithEnvOverrides("entity_api").KeysHash())
if err != nil {
return generic.RESTOptions{}, err
}
storageConfig := &storagebackend.ConfigForResource{
Config: storagebackend.Config{
Type: "custom",
Prefix: "",
Transport: storagebackend.TransportConfig{
ServerList: []string{
string(connectionInfo),
},
},
Codec: f.Codec,
EncodeVersioner: nil,
Transformer: nil,
CompactionInterval: 0,
CountMetricPollPeriod: 0,
DBMetricPollInterval: 0,
HealthcheckTimeout: 0,
ReadycheckTimeout: 0,
StorageObjectCountTracker: nil,
},
GroupResource: resource,
}
ret := generic.RESTOptions{
StorageConfig: storageConfig,
Decorator: func(
config *storagebackend.ConfigForResource,
resourcePrefix string,
keyFunc func(obj runtime.Object) (string, error),
newFunc func() runtime.Object,
newListFunc func() runtime.Object,
getAttrsFunc storage.AttrFunc,
trigger storage.IndexerFuncs,
indexers *cache.Indexers,
) (storage.Interface, factory.DestroyFunc, error) {
return NewStorage(config, resource, f.store, f.Codec, keyFunc, newFunc, newListFunc, getAttrsFunc)
},
DeleteCollectionWorkers: 0,
EnableGarbageCollection: false,
ResourcePrefix: path.Join(storageConfig.Prefix, resource.Group, resource.Resource),
CountMetricPollPeriod: 1 * time.Second,
StorageObjectCountTracker: flowcontrolrequest.NewStorageObjectCountTracker(),
}
return ret, nil
}


@@ -0,0 +1,412 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/kubernetes-sigs/apiserver-runtime/blob/main/pkg/experimental/storage/filepath/jsonfile_rest.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Kubernetes Authors.
package entity
import (
"context"
"errors"
"fmt"
"reflect"
"strconv"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/conversion"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/selection"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/storage"
"k8s.io/apiserver/pkg/storage/storagebackend"
"k8s.io/apiserver/pkg/storage/storagebackend/factory"
entityStore "github.com/grafana/grafana/pkg/services/store/entity"
"github.com/grafana/grafana/pkg/util"
)
var _ storage.Interface = (*Storage)(nil)
const MaxUpdateAttempts = 1
// Storage implements storage.Interface, persisting resources in the entity store.
type Storage struct {
config *storagebackend.ConfigForResource
store entityStore.EntityStoreServer
gr schema.GroupResource
codec runtime.Codec
keyFunc func(obj runtime.Object) (string, error)
newFunc func() runtime.Object
newListFunc func() runtime.Object
getAttrsFunc storage.AttrFunc
// trigger storage.IndexerFuncs
// indexers *cache.Indexers
// watchSet *WatchSet
}
func NewStorage(
config *storagebackend.ConfigForResource,
gr schema.GroupResource,
store entityStore.EntityStoreServer,
codec runtime.Codec,
keyFunc func(obj runtime.Object) (string, error),
newFunc func() runtime.Object,
newListFunc func() runtime.Object,
getAttrsFunc storage.AttrFunc,
) (storage.Interface, factory.DestroyFunc, error) {
return &Storage{
config: config,
gr: gr,
codec: codec,
store: store,
keyFunc: keyFunc,
newFunc: newFunc,
newListFunc: newListFunc,
getAttrsFunc: getAttrsFunc,
}, nil, nil
}
// Create adds a new object at a key unless it already exists. 'ttl' is time-to-live
// in seconds (0 means forever). If no error is returned and out is not nil, out will be
// set to the read value from database.
func (s *Storage) Create(ctx context.Context, key string, obj runtime.Object, out runtime.Object, ttl uint64) error {
requestInfo, ok := request.RequestInfoFrom(ctx)
if !ok {
return apierrors.NewInternalError(fmt.Errorf("could not get request info"))
}
if err := s.Versioner().PrepareObjectForStorage(obj); err != nil {
return err
}
metaAccessor, err := meta.Accessor(obj)
if err != nil {
return err
}
// Replace the default name generation strategy
if metaAccessor.GetGenerateName() != "" {
k, err := entityStore.ParseKey(key)
if err != nil {
return err
}
k.Name = util.GenerateShortUID()
key = k.String()
metaAccessor.SetName(k.Name)
metaAccessor.SetGenerateName("")
}
e, err := resourceToEntity(key, obj, requestInfo, s.codec)
if err != nil {
return err
}
req := &entityStore.CreateEntityRequest{
Entity: e,
}
rsp, err := s.store.Create(ctx, req)
if err != nil {
return err
}
if rsp.Status != entityStore.CreateEntityResponse_CREATED {
return fmt.Errorf("create failed with unexpected status %s", rsp.Status.String())
}
err = entityToResource(rsp.Entity, out, s.codec)
if err != nil {
return apierrors.NewInternalError(err)
}
/*
s.watchSet.notifyWatchers(watch.Event{
Object: out.DeepCopyObject(),
Type: watch.Added,
})
*/
return nil
}
// Delete removes the specified key and returns the value that existed at that spot.
// If key didn't exist, it will return NotFound storage error.
// If 'cachedExistingObject' is non-nil, it can be used as a suggestion about the
// current version of the object to avoid read operation from storage to get it.
// However, the implementations have to retry in case suggestion is stale.
func (s *Storage) Delete(ctx context.Context, key string, out runtime.Object, preconditions *storage.Preconditions, validateDeletion storage.ValidateObjectFunc, cachedExistingObject runtime.Object) error {
previousVersion := int64(0)
if preconditions != nil && preconditions.ResourceVersion != nil {
previousVersion, _ = strconv.ParseInt(*preconditions.ResourceVersion, 10, 64)
}
rsp, err := s.store.Delete(ctx, &entityStore.DeleteEntityRequest{
Key: key,
PreviousVersion: previousVersion,
})
if err != nil {
return err
}
err = entityToResource(rsp.Entity, out, s.codec)
if err != nil {
return apierrors.NewInternalError(err)
}
return nil
}
// Watch begins watching the specified key. Events are decoded into API objects,
// and any items selected by 'p' are sent down to returned watch.Interface.
// resourceVersion may be used to specify what version to begin watching,
// which should be the current resourceVersion, and no longer rv+1
// (e.g. reconnecting without missing any updates).
// If resource version is "0", this interface will get current object at given key
// and send it in an "ADDED" event, before watch starts.
func (s *Storage) Watch(ctx context.Context, key string, opts storage.ListOptions) (watch.Interface, error) {
return nil, apierrors.NewMethodNotSupported(schema.GroupResource{}, "watch")
}
// Get unmarshals object found at key into objPtr. On a not found error, will either
// return a zero object of the requested type, or an error, depending on 'opts.ignoreNotFound'.
// Treats empty responses and nil response nodes exactly like a not found error.
// The returned contents may be delayed, but it is guaranteed that they will
// match 'opts.ResourceVersion' according to 'opts.ResourceVersionMatch'.
func (s *Storage) Get(ctx context.Context, key string, opts storage.GetOptions, objPtr runtime.Object) error {
rsp, err := s.store.Read(ctx, &entityStore.ReadEntityRequest{
Key: key,
WithBody: true,
WithStatus: true,
})
if err != nil {
return err
}
if rsp.Key == "" {
if opts.IgnoreNotFound {
return nil
}
return apierrors.NewNotFound(s.gr, key)
}
err = entityToResource(rsp, objPtr, s.codec)
if err != nil {
return apierrors.NewInternalError(err)
}
return nil
}
// GetList unmarshalls objects found at key into a *List api object (an object
// that satisfies runtime.IsList definition).
// If 'opts.Recursive' is false, 'key' is used as an exact match. If 'opts.Recursive'
// is true, 'key' is used as a prefix.
// The returned contents may be delayed, but it is guaranteed that they will
// match 'opts.ResourceVersion' according to 'opts.ResourceVersionMatch'.
func (s *Storage) GetList(ctx context.Context, key string, opts storage.ListOptions, listObj runtime.Object) error {
listPtr, err := meta.GetItemsPtr(listObj)
if err != nil {
return err
}
v, err := conversion.EnforcePtr(listPtr)
if err != nil {
return err
}
req := &entityStore.EntityListRequest{
Key: []string{key},
WithBody: true,
WithStatus: true,
NextPageToken: opts.Predicate.Continue,
Limit: opts.Predicate.Limit,
Labels: map[string]string{},
// TODO push label/field matching down to storage
}
// translate "equals" label selectors to storage label conditions
requirements, selectable := opts.Predicate.Label.Requirements()
if !selectable {
return apierrors.NewBadRequest("label selector is not selectable")
}
for _, r := range requirements {
if r.Operator() == selection.Equals {
req.Labels[r.Key()] = r.Values().List()[0]
}
}
// translate grafana.app/folder field selector to the folder condition
fieldRequirements, fieldSelector, err := ReadFieldRequirements(opts.Predicate.Field)
if err != nil {
return err
}
if fieldRequirements.Folder != nil {
req.Folder = *fieldRequirements.Folder
}
// Update the field selector to remove the unneeded selectors
opts.Predicate.Field = fieldSelector
rsp, err := s.store.List(ctx, req)
if err != nil {
return apierrors.NewInternalError(err)
}
for _, r := range rsp.Results {
res := s.newFunc()
err := entityToResource(r, res, s.codec)
if err != nil {
return apierrors.NewInternalError(err)
}
// TODO filter in storage
matches, err := opts.Predicate.Matches(res)
if err != nil {
return apierrors.NewInternalError(err)
}
if !matches {
continue
}
v.Set(reflect.Append(v, reflect.ValueOf(res).Elem()))
}
listAccessor, err := meta.ListAccessor(listObj)
if err != nil {
return err
}
if rsp.NextPageToken != "" {
listAccessor.SetContinue(rsp.NextPageToken)
}
return nil
}
// GuaranteedUpdate keeps calling 'tryUpdate()' to update key 'key' (of type 'destination')
// retrying the update until success if there is index conflict.
// Note that object passed to tryUpdate may change across invocations of tryUpdate() if
// other writers are simultaneously updating it, so tryUpdate() needs to take into account
// the current contents of the object when deciding how the update object should look.
// If the key doesn't exist, it will return NotFound storage error if ignoreNotFound=false
// else `destination` will be set to the zero value of its type.
// If the eventual successful invocation of `tryUpdate` returns an output with the same serialized
// contents as the input, it won't perform any update, but instead set `destination` to an object with those
// contents.
// If 'cachedExistingObject' is non-nil, it can be used as a suggestion about the
// current version of the object to avoid read operation from storage to get it.
// However, the implementations have to retry in case suggestion is stale.
func (s *Storage) GuaranteedUpdate(
ctx context.Context,
key string,
destination runtime.Object,
ignoreNotFound bool,
preconditions *storage.Preconditions,
tryUpdate storage.UpdateFunc,
cachedExistingObject runtime.Object,
) error {
var err error
for attempt := 1; attempt <= MaxUpdateAttempts; attempt++ {
err = s.guaranteedUpdate(ctx, key, destination, ignoreNotFound, preconditions, tryUpdate, cachedExistingObject)
if err == nil {
return nil
}
}
return err
}
func (s *Storage) guaranteedUpdate(
ctx context.Context,
key string,
destination runtime.Object,
ignoreNotFound bool,
preconditions *storage.Preconditions,
tryUpdate storage.UpdateFunc,
cachedExistingObject runtime.Object,
) error {
requestInfo, ok := request.RequestInfoFrom(ctx)
if !ok {
return apierrors.NewInternalError(fmt.Errorf("could not get request info"))
}
err := s.Get(ctx, key, storage.GetOptions{}, destination)
if err != nil {
return err
}
accessor, err := meta.Accessor(destination)
if err != nil {
return err
}
previousVersion, _ := strconv.ParseInt(accessor.GetResourceVersion(), 10, 64)
if preconditions != nil && preconditions.ResourceVersion != nil {
previousVersion, _ = strconv.ParseInt(*preconditions.ResourceVersion, 10, 64)
}
res := &storage.ResponseMeta{}
updatedObj, _, err := tryUpdate(destination, *res)
if err != nil {
var statusErr *apierrors.StatusError
if errors.As(err, &statusErr) {
// For now, forbidden may come from a mutation handler
if statusErr.ErrStatus.Reason == metav1.StatusReasonForbidden {
return statusErr
}
}
return apierrors.NewInternalError(fmt.Errorf("could not successfully update object. key=%s, err=%s", key, err.Error()))
}
e, err := resourceToEntity(key, updatedObj, requestInfo, s.codec)
if err != nil {
return err
}
req := &entityStore.UpdateEntityRequest{
Entity: e,
PreviousVersion: previousVersion,
}
rsp, err := s.store.Update(ctx, req)
if err != nil {
return err // TODO: should version conflicts retry instead of failing?
}
if rsp.Status == entityStore.UpdateEntityResponse_UNCHANGED {
return nil // destination is already set
}
err = entityToResource(rsp.Entity, destination, s.codec)
if err != nil {
return apierrors.NewInternalError(err)
}
/*
s.watchSet.notifyWatchers(watch.Event{
Object: destination.DeepCopyObject(),
Type: watch.Modified,
})
*/
return nil
}
// Count returns number of different entries under the key (generally being path prefix).
func (s *Storage) Count(key string) (int64, error) {
return 0, nil
}
func (s *Storage) Versioner() storage.Versioner {
return &storage.APIObjectVersioner{}
}
func (s *Storage) RequestWatchProgress(ctx context.Context) error {
return nil
}
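GuaranteedUpdate above reads the current object, applies `tryUpdate`, and writes back together with the previous resource version, retrying up to `MaxUpdateAttempts` on failure. A stdlib-only sketch of that optimistic-concurrency loop against a hypothetical in-memory store (`object`, `memStore`, and the helper names are illustrative, not part of the entity store API):

```go
package main

import (
	"errors"
	"fmt"
)

// object and memStore are hypothetical stand-ins for the entity store.
type object struct {
	version int64
	value   string
}

type memStore struct{ obj object }

var errConflict = errors.New("resource version conflict")

// update succeeds only if the caller read the latest version, analogous to
// the PreviousVersion check carried by UpdateEntityRequest above.
func (s *memStore) update(prevVersion int64, next object) error {
	if s.obj.version != prevVersion {
		return errConflict
	}
	next.version = prevVersion + 1
	s.obj = next
	return nil
}

// guaranteedUpdate retries the read-modify-write cycle until it succeeds or
// attempts are exhausted, mirroring the retry loop in GuaranteedUpdate.
func guaranteedUpdate(s *memStore, maxAttempts int, tryUpdate func(object) object) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		current := s.obj // re-read the current state on every attempt
		if err = s.update(current.version, tryUpdate(current)); err == nil {
			return nil
		}
	}
	return err
}

func main() {
	s := &memStore{}
	if err := guaranteedUpdate(s, 3, func(o object) object { o.value = "hello"; return o }); err != nil {
		panic(err)
	}
	fmt.Println(s.obj.version, s.obj.value) // 1 hello
}
```

With `MaxUpdateAttempts = 1` in the entity storage above, the loop degenerates to a single attempt, so any conflict surfaces directly to the caller.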


@@ -0,0 +1,175 @@
package entity
import (
"bytes"
"encoding/json"
"fmt"
"reflect"
"strconv"
"time"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apiserver/pkg/endpoints/request"
"github.com/grafana/grafana/pkg/services/apiserver/utils"
entityStore "github.com/grafana/grafana/pkg/services/store/entity"
)
// entityToResource converts an entity store Entity into the given runtime.Object.
func entityToResource(rsp *entityStore.Entity, res runtime.Object, codec runtime.Codec) error {
// Read the body first -- it includes old resourceVersion!
if len(rsp.Body) > 0 {
decoded, _, err := codec.Decode(rsp.Body, &schema.GroupVersionKind{Group: rsp.Group, Version: rsp.GroupVersion}, res)
if err != nil {
return err
}
res = decoded
}
metaAccessor, err := meta.Accessor(res)
if err != nil {
return err
}
if len(rsp.Meta) > 0 {
err = json.Unmarshal(rsp.Meta, res)
if err != nil {
return err
}
}
metaAccessor.SetName(rsp.Name)
metaAccessor.SetNamespace(rsp.Namespace)
metaAccessor.SetUID(types.UID(rsp.Guid))
metaAccessor.SetResourceVersion(fmt.Sprintf("%d", rsp.ResourceVersion))
metaAccessor.SetCreationTimestamp(metav1.Unix(rsp.CreatedAt/1000, rsp.CreatedAt%1000*1000000))
grafanaAccessor, err := utils.MetaAccessor(metaAccessor)
if err != nil {
return err
}
if rsp.Folder != "" {
grafanaAccessor.SetFolder(rsp.Folder)
}
if rsp.CreatedBy != "" {
grafanaAccessor.SetCreatedBy(rsp.CreatedBy)
}
if rsp.UpdatedBy != "" {
grafanaAccessor.SetUpdatedBy(rsp.UpdatedBy)
}
if rsp.UpdatedAt != 0 {
updatedAt := time.UnixMilli(rsp.UpdatedAt).UTC()
grafanaAccessor.SetUpdatedTimestamp(&updatedAt)
}
grafanaAccessor.SetSlug(rsp.Slug)
if rsp.Origin != nil {
originTime := time.UnixMilli(rsp.Origin.Time).UTC()
grafanaAccessor.SetOriginInfo(&utils.ResourceOriginInfo{
Name: rsp.Origin.Source,
Key: rsp.Origin.Key,
// Path: rsp.Origin.Path,
Timestamp: &originTime,
})
}
if len(rsp.Labels) > 0 {
metaAccessor.SetLabels(rsp.Labels)
}
// TODO fields?
if len(rsp.Status) > 0 {
status := reflect.ValueOf(res).Elem().FieldByName("Status")
if status != (reflect.Value{}) && status.CanSet() {
err = json.Unmarshal(rsp.Status, status.Addr().Interface())
if err != nil {
return err
}
}
}
return nil
}
func resourceToEntity(key string, res runtime.Object, requestInfo *request.RequestInfo, codec runtime.Codec) (*entityStore.Entity, error) {
metaAccessor, err := meta.Accessor(res)
if err != nil {
return nil, err
}
grafanaAccessor, err := utils.MetaAccessor(metaAccessor)
if err != nil {
return nil, err
}
rv, _ := strconv.ParseInt(metaAccessor.GetResourceVersion(), 10, 64)
rsp := &entityStore.Entity{
Group: requestInfo.APIGroup,
GroupVersion: requestInfo.APIVersion,
Resource: requestInfo.Resource,
Subresource: requestInfo.Subresource,
Namespace: metaAccessor.GetNamespace(),
Key: key,
Name: metaAccessor.GetName(),
Guid: string(metaAccessor.GetUID()),
ResourceVersion: rv,
Folder: grafanaAccessor.GetFolder(),
CreatedAt: metaAccessor.GetCreationTimestamp().Time.UnixMilli(),
CreatedBy: grafanaAccessor.GetCreatedBy(),
UpdatedBy: grafanaAccessor.GetUpdatedBy(),
Slug: grafanaAccessor.GetSlug(),
Title: grafanaAccessor.FindTitle(metaAccessor.GetName()),
Origin: &entityStore.EntityOriginInfo{
Source: grafanaAccessor.GetOriginName(),
Key: grafanaAccessor.GetOriginKey(),
// Path: grafanaAccessor.GetOriginPath(),
},
Labels: metaAccessor.GetLabels(),
}
t, err := grafanaAccessor.GetUpdatedTimestamp()
if err != nil {
return nil, err
}
if t != nil {
rsp.UpdatedAt = t.UnixMilli()
}
t, err = grafanaAccessor.GetOriginTimestamp()
if err != nil {
return nil, err
}
if t != nil {
rsp.Origin.Time = t.UnixMilli()
}
rsp.Meta, err = json.Marshal(meta.AsPartialObjectMetadata(metaAccessor))
if err != nil {
return nil, err
}
var buf bytes.Buffer
err = codec.Encode(res, &buf)
if err != nil {
return nil, err
}
rsp.Body = buf.Bytes()
status := reflect.ValueOf(res).Elem().FieldByName("Status")
if status != (reflect.Value{}) {
rsp.Status, err = json.Marshal(status.Interface())
if err != nil {
return nil, err
}
}
return rsp, nil
}
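The conversions above carry timestamps as Unix milliseconds (`CreatedAt`, `UpdatedAt`) and split them into a seconds/nanoseconds pair when building metav1 times. A stdlib-only sketch of that split and its round-trip (`fromUnixMilli` is an illustrative helper, not part of the package):

```go
package main

import (
	"fmt"
	"time"
)

// fromUnixMilli splits a millisecond timestamp into the (sec, nsec) pair,
// the same arithmetic entityToResource above uses when calling metav1.Unix.
func fromUnixMilli(ms int64) time.Time {
	return time.Unix(ms/1000, ms%1000*int64(time.Millisecond)).UTC()
}

func main() {
	ms := int64(1706800000123)
	t := fromUnixMilli(ms)
	fmt.Println(t.UnixMilli() == ms)         // the split round-trips exactly
	fmt.Println(t.Equal(time.UnixMilli(ms))) // and agrees with time.UnixMilli
}
```

Note that RFC 3339 serialization of the updated timestamp drops sub-second precision, which is why the tests below truncate `updatedAt` to whole seconds.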


@@ -0,0 +1,213 @@
package entity
import (
"fmt"
"testing"
"time"
"github.com/grafana/grafana/pkg/apis/playlist/v0alpha1"
entityStore "github.com/grafana/grafana/pkg/services/store/entity"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/serializer"
"k8s.io/apiserver/pkg/endpoints/request"
)
func TestResourceToEntity(t *testing.T) {
createdAt := metav1.Now()
createdAtStr := createdAt.UTC().Format(time.RFC3339)
// truncate here because RFC3339 doesn't support millisecond precision
// consider updating accessor to use RFC3339Nano to encode timestamps
updatedAt := createdAt.Add(time.Hour).Truncate(time.Second)
updatedAtStr := updatedAt.UTC().Format(time.RFC3339)
apiVersion := "v0alpha1"
requestInfo := &request.RequestInfo{
APIVersion: apiVersion,
}
Scheme := runtime.NewScheme()
Scheme.AddKnownTypes(v0alpha1.PlaylistResourceInfo.GroupVersion(), &v0alpha1.Playlist{})
Codecs := serializer.NewCodecFactory(Scheme)
testCases := []struct {
key string
resource runtime.Object
codec runtime.Codec
expectedKey string
expectedGroupVersion string
expectedName string
expectedTitle string
expectedGuid string
expectedVersion string
expectedFolder string
expectedCreatedAt int64
expectedUpdatedAt int64
expectedCreatedBy string
expectedUpdatedBy string
expectedSlug string
expectedOrigin *entityStore.EntityOriginInfo
expectedLabels map[string]string
expectedMeta []byte
expectedBody []byte
}{
{
key: "/playlist.grafana.app/playlists/default/test-uid",
resource: &v0alpha1.Playlist{
TypeMeta: metav1.TypeMeta{
APIVersion: apiVersion,
},
ObjectMeta: metav1.ObjectMeta{
CreationTimestamp: createdAt,
Labels: map[string]string{"label1": "value1", "label2": "value2"},
Name: "test-name",
ResourceVersion: "1",
UID: "test-uid",
Annotations: map[string]string{
"grafana.app/createdBy": "test-created-by",
"grafana.app/updatedBy": "test-updated-by",
"grafana.app/updatedTimestamp": updatedAtStr,
"grafana.app/folder": "test-folder",
"grafana.app/slug": "test-slug",
},
},
Spec: v0alpha1.Spec{
Title: "A playlist",
Interval: "5m",
Items: []v0alpha1.Item{
{Type: v0alpha1.ItemTypeDashboardByTag, Value: "panel-tests"},
{Type: v0alpha1.ItemTypeDashboardByUid, Value: "vmie2cmWz"},
},
},
},
expectedKey: "/playlist.grafana.app/playlists/default/test-uid",
expectedGroupVersion: apiVersion,
expectedName: "test-name",
expectedTitle: "A playlist",
expectedGuid: "test-uid",
expectedVersion: "1",
expectedFolder: "test-folder",
expectedCreatedAt: createdAt.UnixMilli(),
expectedUpdatedAt: updatedAt.UnixMilli(),
expectedCreatedBy: "test-created-by",
expectedUpdatedBy: "test-updated-by",
expectedSlug: "test-slug",
expectedOrigin: &entityStore.EntityOriginInfo{Source: "", Key: ""},
expectedLabels: map[string]string{"label1": "value1", "label2": "value2"},
expectedMeta: []byte(fmt.Sprintf(`{"metadata":{"name":"test-name","uid":"test-uid","resourceVersion":"1","creationTimestamp":%q,"labels":{"label1":"value1","label2":"value2"},"annotations":{"grafana.app/createdBy":"test-created-by","grafana.app/folder":"test-folder","grafana.app/slug":"test-slug","grafana.app/updatedBy":"test-updated-by","grafana.app/updatedTimestamp":%q}}}`, createdAtStr, updatedAtStr)),
expectedBody: []byte(fmt.Sprintf(`{"kind":"Playlist","apiVersion":"playlist.grafana.app/v0alpha1","metadata":{"name":"test-name","uid":"test-uid","resourceVersion":"1","creationTimestamp":%q,"labels":{"label1":"value1","label2":"value2"},"annotations":{"grafana.app/createdBy":"test-created-by","grafana.app/folder":"test-folder","grafana.app/slug":"test-slug","grafana.app/updatedBy":"test-updated-by","grafana.app/updatedTimestamp":%q}},"spec":{"title":"A playlist","interval":"5m","items":[{"type":"dashboard_by_tag","value":"panel-tests"},{"type":"dashboard_by_uid","value":"vmie2cmWz"}]}}`, createdAtStr, updatedAtStr)),
},
}
for _, tc := range testCases {
t.Run(tc.resource.GetObjectKind().GroupVersionKind().Kind+" to entity conversion should succeed", func(t *testing.T) {
entity, err := resourceToEntity(tc.key, tc.resource, requestInfo, Codecs.LegacyCodec(v0alpha1.PlaylistResourceInfo.GroupVersion()))
require.NoError(t, err)
assert.Equal(t, tc.expectedKey, entity.Key)
assert.Equal(t, tc.expectedName, entity.Name)
assert.Equal(t, tc.expectedTitle, entity.Title)
assert.Equal(t, tc.expectedGroupVersion, entity.GroupVersion)
assert.Equal(t, tc.expectedGuid, entity.Guid)
assert.Equal(t, tc.expectedFolder, entity.Folder)
assert.Equal(t, tc.expectedCreatedAt, entity.CreatedAt)
assert.Equal(t, tc.expectedUpdatedAt, entity.UpdatedAt)
assert.Equal(t, tc.expectedCreatedBy, entity.CreatedBy)
assert.Equal(t, tc.expectedUpdatedBy, entity.UpdatedBy)
assert.Equal(t, tc.expectedSlug, entity.Slug)
assert.Equal(t, tc.expectedOrigin, entity.Origin)
assert.Equal(t, tc.expectedLabels, entity.Labels)
assert.Equal(t, tc.expectedMeta, entity.Meta)
assert.Equal(t, tc.expectedBody, entity.Body[:len(entity.Body)-1]) // remove trailing newline
})
}
}
func TestEntityToResource(t *testing.T) {
createdAt := metav1.Now()
createdAtStr := createdAt.UTC().Format(time.RFC3339)
updatedAt := createdAt.Add(time.Hour)
updatedAtStr := updatedAt.UTC().Format(time.RFC3339)
Scheme := runtime.NewScheme()
Scheme.AddKnownTypes(v0alpha1.PlaylistResourceInfo.GroupVersion(), &v0alpha1.Playlist{})
Codecs := serializer.NewCodecFactory(Scheme)
testCases := []struct {
entity *entityStore.Entity
codec runtime.Codec
expectedApiVersion string
expectedCreationTimestamp metav1.Time
expectedLabels map[string]string
expectedName string
expectedResourceVersion string
expectedUid string
expectedTitle string
expectedAnnotations map[string]string
expectedSpec any
}{
{
entity: &entityStore.Entity{
Key: "/playlist.grafana.app/playlists/default/test-uid",
GroupVersion: "v0alpha1",
Name: "test-uid",
Title: "A playlist",
Guid: "test-guid",
Folder: "test-folder",
CreatedBy: "test-created-by",
CreatedAt: createdAt.UnixMilli(),
UpdatedAt: updatedAt.UnixMilli(),
UpdatedBy: "test-updated-by",
Slug: "test-slug",
Origin: &entityStore.EntityOriginInfo{},
Labels: map[string]string{"label1": "value1", "label2": "value2"},
Meta: []byte(fmt.Sprintf(`{"metadata":{"name":"test-name","uid":"test-uid","resourceVersion":"1","creationTimestamp":%q,"labels":{"label1":"value1","label2":"value2"},"annotations":{"grafana.app/createdBy":"test-created-by","grafana.app/folder":"test-folder","grafana.app/slug":"test-slug","grafana.app/updatedTimestamp":%q,"grafana.app/updatedBy":"test-updated-by"}}}`, createdAtStr, updatedAtStr)),
Body: []byte(fmt.Sprintf(`{"kind":"Playlist","apiVersion":"playlist.grafana.app/v0alpha1","metadata":{"name":"test-name","uid":"test-uid","resourceVersion":"1","creationTimestamp":%q,"labels":{"label1":"value1","label2":"value2"},"annotations":{"grafana.app/createdBy":"test-created-by","grafana.app/folder":"test-folder","grafana.app/slug":"test-slug","grafana.app/updatedBy":"test-updated-by","grafana.app/updatedTimestamp":%q}},"spec":{"title":"A playlist","interval":"5m","items":[{"type":"dashboard_by_tag","value":"panel-tests"},{"type":"dashboard_by_uid","value":"vmie2cmWz"}]}}`, createdAtStr, updatedAtStr)),
ResourceVersion: 1,
},
codec: runtime.Codec(nil),
expectedApiVersion: "playlist.grafana.app/v0alpha1",
expectedCreationTimestamp: createdAt,
expectedLabels: map[string]string{"label1": "value1", "label2": "value2"},
expectedName: "test-uid",
expectedTitle: "test-name",
expectedResourceVersion: "1",
expectedUid: "test-guid",
expectedAnnotations: map[string]string{
"grafana.app/createdBy": "test-created-by",
"grafana.app/folder": "test-folder",
"grafana.app/slug": "test-slug",
"grafana.app/updatedBy": "test-updated-by",
"grafana.app/updatedTimestamp": updatedAtStr,
},
expectedSpec: v0alpha1.Spec{
Title: "A playlist",
Interval: "5m",
Items: []v0alpha1.Item{
{Type: v0alpha1.ItemTypeDashboardByTag, Value: "panel-tests"},
{Type: v0alpha1.ItemTypeDashboardByUid, Value: "vmie2cmWz"},
},
},
},
}
for _, tc := range testCases {
t.Run(tc.entity.Key+" to resource conversion should succeed", func(t *testing.T) {
var p v0alpha1.Playlist
err := entityToResource(tc.entity, &p, Codecs.LegacyCodec(v0alpha1.PlaylistResourceInfo.GroupVersion()))
require.NoError(t, err)
assert.Equal(t, tc.expectedApiVersion, p.TypeMeta.APIVersion)
assert.Equal(t, tc.expectedCreationTimestamp.Unix(), p.ObjectMeta.CreationTimestamp.Unix())
assert.Equal(t, tc.expectedLabels, p.ObjectMeta.Labels)
assert.Equal(t, tc.expectedName, p.ObjectMeta.Name)
assert.Equal(t, tc.expectedResourceVersion, p.ObjectMeta.ResourceVersion)
assert.Equal(t, tc.expectedUid, string(p.ObjectMeta.UID))
assert.Equal(t, tc.expectedAnnotations, p.ObjectMeta.Annotations)
assert.Equal(t, tc.expectedSpec, p.Spec)
})
}
}
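The file-backed storage in the next file generates resource versions from snowflake IDs (via `github.com/bwmarrin/snowflake`), which pack a millisecond timestamp, a node ID, and a per-millisecond sequence counter into a single int64. A minimal stdlib-only sketch of that layout, assuming the common 41/10/12-bit split (the field widths and `generator` type are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// generator hands out snowflake-style IDs: monotonically increasing as long
// as the clock never moves backwards and fewer than 4096 IDs are requested
// within a single millisecond.
type generator struct {
	mu     sync.Mutex
	node   int64
	seq    int64
	lastMs int64
}

func (g *generator) next() int64 {
	g.mu.Lock()
	defer g.mu.Unlock()
	now := time.Now().UnixMilli()
	if now == g.lastMs {
		g.seq++ // same millisecond: bump the sequence counter
	} else {
		g.seq = 0
		g.lastMs = now
	}
	// 41 bits of timestamp, 10 bits of node, 12 bits of sequence.
	return now<<22 | g.node<<12 | g.seq
}

func main() {
	g := &generator{node: 1}
	a, b := g.next(), g.next()
	fmt.Println(b > a) // later IDs always sort after earlier ones
}
```

Because the timestamp occupies the high bits, comparing IDs numerically orders objects by creation time, which is what makes them usable as resource versions.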


@@ -0,0 +1,528 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/kubernetes-sigs/apiserver-runtime/blob/main/pkg/experimental/storage/filepath/jsonfile_rest.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Kubernetes Authors.
package file
import (
"context"
"errors"
"fmt"
"path/filepath"
"reflect"
"strings"
"time"
"github.com/bwmarrin/snowflake"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/conversion"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/apiserver/pkg/storage"
"k8s.io/apiserver/pkg/storage/storagebackend"
"k8s.io/apiserver/pkg/storage/storagebackend/factory"
"k8s.io/client-go/tools/cache"
)
const MaxUpdateAttempts = 30
var _ storage.Interface = (*Storage)(nil)
// Replace with: https://github.com/kubernetes/kubernetes/blob/v1.29.0-alpha.3/staging/src/k8s.io/apiserver/pkg/storage/errors.go#L28
// When we upgrade to 1.29
var errResourceVersionSetOnCreate = errors.New("resourceVersion should not be set on objects to be created")
// Storage implements storage.Interface, storing resources as JSON files on disk.
type Storage struct {
root string
gr schema.GroupResource
codec runtime.Codec
keyFunc func(obj runtime.Object) (string, error)
newFunc func() runtime.Object
newListFunc func() runtime.Object
getAttrsFunc storage.AttrFunc
trigger storage.IndexerFuncs
indexers *cache.Indexers
watchSet *WatchSet
}
// ErrFileNotExists means the file doesn't actually exist.
var ErrFileNotExists = fmt.Errorf("file doesn't exist")
// ErrNamespaceNotExists means the directory for the namespace doesn't actually exist.
var ErrNamespaceNotExists = errors.New("namespace does not exist")
func getResourceVersion() (*uint64, error) {
node, err := snowflake.NewNode(1)
if err != nil {
return nil, err
}
snowflakeNumber := node.Generate().Int64()
resourceVersion := uint64(snowflakeNumber)
return &resourceVersion, nil
}
// NewStorage instantiates a new Storage.
func NewStorage(
config *storagebackend.ConfigForResource,
resourcePrefix string,
keyFunc func(obj runtime.Object) (string, error),
newFunc func() runtime.Object,
newListFunc func() runtime.Object,
getAttrsFunc storage.AttrFunc,
trigger storage.IndexerFuncs,
indexers *cache.Indexers,
) (storage.Interface, factory.DestroyFunc, error) {
if err := ensureDir(resourcePrefix); err != nil {
return nil, func() {}, fmt.Errorf("could not establish a writable directory at path=%s", resourcePrefix)
}
ws := NewWatchSet()
return &Storage{
root: resourcePrefix,
gr: config.GroupResource,
codec: config.Codec,
keyFunc: keyFunc,
newFunc: newFunc,
newListFunc: newListFunc,
getAttrsFunc: getAttrsFunc,
trigger: trigger,
indexers: indexers,
watchSet: ws,
}, func() {
ws.cleanupWatchers()
}, nil
}
// Returns Versioner associated with this storage.
func (s *Storage) Versioner() storage.Versioner {
return &storage.APIObjectVersioner{}
}
// Create adds a new object at a key unless it already exists. 'ttl' is time-to-live
// in seconds (0 means forever). If no error is returned and out is not nil, out will be
// set to the read value from database.
func (s *Storage) Create(ctx context.Context, key string, obj runtime.Object, out runtime.Object, ttl uint64) error {
filename := s.filePath(key)
if exists(filename) {
return storage.NewKeyExistsError(key, 0)
}
dirname := filepath.Dir(filename)
if err := ensureDir(dirname); err != nil {
return err
}
generatedRV, err := getResourceVersion()
if err != nil {
return err
}
metaObj, err := meta.Accessor(obj)
if err != nil {
return err
}
metaObj.SetSelfLink("")
if metaObj.GetResourceVersion() != "" {
return errResourceVersionSetOnCreate
}
if err := s.Versioner().UpdateObject(obj, *generatedRV); err != nil {
return err
}
if err := writeFile(s.codec, filename, obj); err != nil {
return err
}
// set a timer to delete the file after ttl seconds
if ttl > 0 {
time.AfterFunc(time.Second*time.Duration(ttl), func() {
if err := s.Delete(ctx, key, s.newFunc(), &storage.Preconditions{}, func(ctx context.Context, obj runtime.Object) error { return nil }, obj); err != nil {
panic(err)
}
})
}
if err := s.Get(ctx, key, storage.GetOptions{}, out); err != nil {
return err
}
s.watchSet.notifyWatchers(watch.Event{
Object: out.DeepCopyObject(),
Type: watch.Added,
})
return nil
}
// Delete removes the specified key and returns the value that existed at that spot.
// If key didn't exist, it will return NotFound storage error.
// If 'cachedExistingObject' is non-nil, it can be used as a suggestion about the
// current version of the object to avoid read operation from storage to get it.
// However, the implementations have to retry in case suggestion is stale.
func (s *Storage) Delete(
ctx context.Context,
key string,
out runtime.Object,
preconditions *storage.Preconditions,
validateDeletion storage.ValidateObjectFunc,
cachedExistingObject runtime.Object,
) error {
filename := s.filePath(key)
var currentState runtime.Object
var stateIsCurrent bool
if cachedExistingObject != nil {
currentState = cachedExistingObject
} else {
// Get needs a non-nil object to decode into.
currentState = s.newFunc()
getOptions := storage.GetOptions{}
if preconditions != nil && preconditions.ResourceVersion != nil {
getOptions.ResourceVersion = *preconditions.ResourceVersion
}
if err := s.Get(ctx, key, getOptions, currentState); err == nil {
stateIsCurrent = true
}
}
for {
if preconditions != nil {
if err := preconditions.Check(key, out); err != nil {
if stateIsCurrent {
return err
}
// If the state is not current, we need to re-read the state and try again.
if err := s.Get(ctx, key, storage.GetOptions{}, currentState); err != nil {
return err
}
stateIsCurrent = true
continue
}
}
if err := validateDeletion(ctx, out); err != nil {
if stateIsCurrent {
return err
}
// If the state is not current, we need to re-read the state and try again.
if err := s.Get(ctx, key, storage.GetOptions{}, currentState); err == nil {
stateIsCurrent = true
}
continue
}
if err := s.Get(ctx, key, storage.GetOptions{}, out); err != nil {
return err
}
generatedRV, err := getResourceVersion()
if err != nil {
return err
}
if err := s.Versioner().UpdateObject(out, *generatedRV); err != nil {
return err
}
if err := deleteFile(filename); err != nil {
return err
}
s.watchSet.notifyWatchers(watch.Event{
Object: out.DeepCopyObject(),
Type: watch.Deleted,
})
return nil
}
}
// Watch begins watching the specified key. Events are decoded into API objects,
// and any items selected by 'p' are sent down to returned watch.Interface.
// resourceVersion may be used to specify what version to begin watching,
// which should be the current resourceVersion, and no longer rv+1
// (e.g. reconnecting without missing any updates).
// If resource version is "0", this interface will get current object at given key
// and send it in an "ADDED" event, before watch starts.
func (s *Storage) Watch(ctx context.Context, key string, opts storage.ListOptions) (watch.Interface, error) {
p := opts.Predicate
jw := s.watchSet.newWatch()
listObj := s.newListFunc()
if opts.ResourceVersion == "0" {
err := s.GetList(ctx, key, opts, listObj)
if err != nil {
return nil, err
}
}
initEvents := make([]watch.Event, 0)
listPtr, err := meta.GetItemsPtr(listObj)
if err != nil {
return nil, err
}
v, err := conversion.EnforcePtr(listPtr)
if err != nil {
return nil, err
}
if v.IsNil() {
jw.Start(p, initEvents)
return jw, nil
}
for _, obj := range v.Elem().Interface().([]runtime.Object) {
initEvents = append(initEvents, watch.Event{
Type: watch.Added,
Object: obj.DeepCopyObject(),
})
}
jw.Start(p, initEvents)
return jw, nil
}
// Get unmarshals object found at key into objPtr. On a not found error, will either
// return a zero object of the requested type, or an error, depending on 'opts.ignoreNotFound'.
// Treats empty responses and nil response nodes exactly like a not found error.
// The returned contents may be delayed, but it is guaranteed that they will
// match 'opts.ResourceVersion' according to 'opts.ResourceVersionMatch'.
func (s *Storage) Get(ctx context.Context, key string, opts storage.GetOptions, objPtr runtime.Object) error {
filename := s.filePath(key)
obj, err := readFile(s.codec, filename, func() runtime.Object {
return objPtr
})
if err != nil {
if opts.IgnoreNotFound {
return runtime.SetZeroValue(objPtr)
}
rv, err := s.Versioner().ParseResourceVersion(opts.ResourceVersion)
if err != nil {
return err
}
return storage.NewKeyNotFoundError(key, int64(rv))
}
currentVersion, err := s.Versioner().ObjectResourceVersion(obj)
if err != nil {
return err
}
if err = s.validateMinimumResourceVersion(opts.ResourceVersion, currentVersion); err != nil {
return err
}
return nil
}
// GetList unmarshals objects found at key into a *List api object (an object
// that satisfies the runtime.IsList definition).
// If 'opts.Recursive' is false, 'key' is used as an exact match. If 'opts.Recursive'
// is true, 'key' is used as a prefix.
// The returned contents may be delayed, but it is guaranteed that they will
// match 'opts.ResourceVersion' according to 'opts.ResourceVersionMatch'.
func (s *Storage) GetList(ctx context.Context, key string, opts storage.ListOptions, listObj runtime.Object) error {
generatedRV, err := getResourceVersion()
if err != nil {
return err
}
remainingItems := int64(0)
if err := s.Versioner().UpdateList(listObj, *generatedRV, "", &remainingItems); err != nil {
return err
}
// TODO: hack the resource version for now
// Watch fails when the list resourceVersion is set to 0, even though informers provide that in the opts
if opts.ResourceVersion == "0" {
opts.ResourceVersion = "1"
}
if opts.ResourceVersion != "" {
resourceVersionInt, err := s.Versioner().ParseResourceVersion(opts.ResourceVersion)
if err != nil {
return err
}
if err := s.Versioner().UpdateList(listObj, resourceVersionInt, "", &remainingItems); err != nil {
return err
}
}
objs, err := readDirRecursive(s.codec, key, s.newFunc)
if err != nil {
return err
}
listPtr, err := meta.GetItemsPtr(listObj)
if err != nil {
return err
}
v, err := conversion.EnforcePtr(listPtr)
if err != nil {
return err
}
for _, obj := range objs {
currentVersion, err := s.Versioner().ObjectResourceVersion(obj)
if err != nil {
return err
}
if err = s.validateMinimumResourceVersion(opts.ResourceVersion, currentVersion); err != nil {
continue
}
ok, err := opts.Predicate.Matches(obj)
if err == nil && ok {
v.Set(reflect.Append(v, reflect.ValueOf(obj).Elem()))
}
}
return nil
}
// GuaranteedUpdate keeps calling 'tryUpdate()' to update key 'key' (of type 'destination')
// retrying the update until success if there is index conflict.
// Note that object passed to tryUpdate may change across invocations of tryUpdate() if
// other writers are simultaneously updating it, so tryUpdate() needs to take into account
// the current contents of the object when deciding how the update object should look.
// If the key doesn't exist, it will return a NotFound storage error if ignoreNotFound=false,
// else `destination` will be set to the zero value of its type.
// If the eventual successful invocation of `tryUpdate` returns an output with the same serialized
// contents as the input, it won't perform any update, but instead set `destination` to an object with those
// contents.
// If 'cachedExistingObject' is non-nil, it can be used as a suggestion about the
// current version of the object to avoid read operation from storage to get it.
// However, the implementations have to retry in case suggestion is stale.
func (s *Storage) GuaranteedUpdate(
ctx context.Context,
key string,
destination runtime.Object,
ignoreNotFound bool,
preconditions *storage.Preconditions,
tryUpdate storage.UpdateFunc,
cachedExistingObject runtime.Object,
) error {
var res storage.ResponseMeta
for attempt := 1; attempt <= MaxUpdateAttempts; attempt++ {
var (
filename = s.filePath(key)
obj runtime.Object
err error
created bool
)
if !exists(filename) && !ignoreNotFound {
return apierrors.NewNotFound(s.gr, s.nameFromKey(key))
}
obj, err = readFile(s.codec, filename, s.newFunc)
if err != nil {
// fall back to a new object if the file could not be read
obj = s.newFunc()
created = true
}
if err := preconditions.Check(key, obj); err != nil {
if attempt >= MaxUpdateAttempts {
return fmt.Errorf("precondition failed: %w", err)
}
continue
}
updatedObj, _, err := tryUpdate(obj, res)
if err != nil {
if attempt >= MaxUpdateAttempts {
return err
}
continue
}
unchanged, err := isUnchanged(s.codec, obj, updatedObj)
if err != nil {
return err
}
if unchanged {
u, err := conversion.EnforcePtr(updatedObj)
if err != nil {
return fmt.Errorf("unable to enforce updated object pointer: %w", err)
}
d, err := conversion.EnforcePtr(destination)
if err != nil {
return fmt.Errorf("unable to enforce destination pointer: %w", err)
}
d.Set(u)
return nil
}
generatedRV, err := getResourceVersion()
if err != nil {
return err
}
if err := s.Versioner().UpdateObject(updatedObj, *generatedRV); err != nil {
return err
}
if err := writeFile(s.codec, filename, updatedObj); err != nil {
return err
}
eventType := watch.Modified
if created {
eventType = watch.Added
}
s.watchSet.notifyWatchers(watch.Event{
Object: updatedObj.DeepCopyObject(),
Type: eventType,
})
}
return nil
}
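The retry loop above is the standard optimistic-concurrency pattern: read the current state, ask `tryUpdate` for the desired state, skip the write when nothing changed, and retry on conflict. A minimal, self-contained sketch of the same flow (toy string-based state, hypothetical names, not the apiserver interfaces):

```go
package main

import (
	"errors"
	"fmt"
)

const maxAttempts = 3

// guaranteedUpdate re-reads the current state and applies tryUpdate until a
// write succeeds, mirroring the optimistic retry loop in GuaranteedUpdate.
func guaranteedUpdate(
	read func() (string, error),
	write func(string) error,
	tryUpdate func(current string) (string, error),
) error {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		current, err := read()
		if err != nil {
			return err
		}
		updated, err := tryUpdate(current)
		if err != nil {
			if attempt >= maxAttempts {
				return err
			}
			continue // assume a stale read and retry with fresh state
		}
		if updated == current {
			return nil // unchanged: skip the no-op write
		}
		if err := write(updated); err == nil {
			return nil
		}
	}
	return errors.New("exceeded max update attempts")
}

func main() {
	state := "v1"
	err := guaranteedUpdate(
		func() (string, error) { return state, nil },
		func(s string) error { state = s; return nil },
		func(cur string) (string, error) { return cur + "+edited", nil },
	)
	fmt.Println(state, err) // v1+edited <nil>
}
```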
// Count returns the number of different entries under the key (generally being a path prefix).
// Note: not implemented for file storage; always returns 0.
func (s *Storage) Count(key string) (int64, error) {
return 0, nil
}
// RequestWatchProgress requests that a watch stream progress status be sent in the
// watch response stream as soon as possible.
// Used to monitor watch progress even if watching resources with no changes.
//
// If watch is lagging, progress status might:
// * be pointing to stale resource version. Use etcd KV request to get linearizable resource version.
// * not be delivered at all. It's recommended to poll request progress periodically.
//
// Note: Only watches with matching context grpc metadata will be notified.
// https://github.com/kubernetes/kubernetes/blob/9325a57125e8502941d1b0c7379c4bb80a678d5c/vendor/go.etcd.io/etcd/client/v3/watch.go#L1037-L1042
//
// TODO: Remove when storage.Interface will be separate from etcd3.store.
// Deprecated: Added temporarily to simplify exposing RequestProgress for watch cache.
func (s *Storage) RequestWatchProgress(ctx context.Context) error {
return nil
}
// validateMinimumResourceVersion returns a 'too large resource' version error when the provided minimumResourceVersion is
// greater than the most recent actualRevision available from storage.
func (s *Storage) validateMinimumResourceVersion(minimumResourceVersion string, actualRevision uint64) error {
if minimumResourceVersion == "" {
return nil
}
minimumRV, err := s.Versioner().ParseResourceVersion(minimumResourceVersion)
if err != nil {
return apierrors.NewBadRequest(fmt.Sprintf("invalid resource version: %v", err))
}
// Enforce the storage.Interface guarantee that the resource version of the returned data
// "will be at least 'resourceVersion'".
if minimumRV > actualRevision {
return storage.NewTooLargeResourceVersionError(minimumRV, actualRevision, 0)
}
return nil
}
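Stripped of the apiserver types, the check above is just a numeric comparison between the requested minimum and the most recent revision in storage. A stdlib-only sketch (hypothetical helper, not the apiserver's Versioner):

```go
package main

import (
	"fmt"
	"strconv"
)

// checkMinimumRV returns an error when the caller asked for a resource
// version newer than anything the store has seen, enforcing the guarantee
// that returned data is "at least" as fresh as the requested version.
func checkMinimumRV(minimum string, actual uint64) error {
	if minimum == "" {
		return nil // no constraint requested
	}
	minRV, err := strconv.ParseUint(minimum, 10, 64)
	if err != nil {
		return fmt.Errorf("invalid resource version: %v", err)
	}
	if minRV > actual {
		return fmt.Errorf("too large resource version: %d > %d", minRV, actual)
	}
	return nil
}

func main() {
	fmt.Println(checkMinimumRV("", 5))         // <nil>
	fmt.Println(checkMinimumRV("3", 5))        // <nil>
	fmt.Println(checkMinimumRV("9", 5) != nil) // true
}
```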
func (s *Storage) nameFromKey(key string) string {
return strings.Replace(key, s.root+"/", "", 1)
}


@@ -0,0 +1,60 @@
// SPDX-License-Identifier: AGPL-3.0-only
package file
import (
"path"
"time"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apiserver/pkg/registry/generic"
"k8s.io/apiserver/pkg/storage/storagebackend"
flowcontrolrequest "k8s.io/apiserver/pkg/util/flowcontrol/request"
)
var _ generic.RESTOptionsGetter = (*RESTOptionsGetter)(nil)
type RESTOptionsGetter struct {
path string
original storagebackend.Config
}
func NewRESTOptionsGetter(path string, originalStorageConfig storagebackend.Config) *RESTOptionsGetter {
if path == "" {
path = "/tmp/grafana-apiserver"
}
return &RESTOptionsGetter{path: path, original: originalStorageConfig}
}
func (r *RESTOptionsGetter) GetRESTOptions(resource schema.GroupResource) (generic.RESTOptions, error) {
storageConfig := &storagebackend.ConfigForResource{
Config: storagebackend.Config{
Type: "file",
Prefix: r.path,
Transport: storagebackend.TransportConfig{},
Codec: r.original.Codec,
EncodeVersioner: r.original.EncodeVersioner,
Transformer: r.original.Transformer,
CompactionInterval: 0,
CountMetricPollPeriod: 0,
DBMetricPollInterval: 0,
HealthcheckTimeout: 0,
ReadycheckTimeout: 0,
StorageObjectCountTracker: flowcontrolrequest.NewStorageObjectCountTracker(),
},
GroupResource: resource,
}
ret := generic.RESTOptions{
StorageConfig: storageConfig,
Decorator: NewStorage,
DeleteCollectionWorkers: 0,
EnableGarbageCollection: false,
ResourcePrefix: path.Join(storageConfig.Prefix, resource.Group, resource.Resource),
CountMetricPollPeriod: 1 * time.Second,
StorageObjectCountTracker: storageConfig.Config.StorageObjectCountTracker,
}
return ret, nil
}


@@ -0,0 +1,95 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/kubernetes-sigs/apiserver-runtime/blob/main/pkg/experimental/storage/filepath/jsonfile_rest.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Kubernetes Authors.
package file
import (
"bytes"
"errors"
"os"
"path/filepath"
"k8s.io/apimachinery/pkg/runtime"
)
func (s *Storage) filePath(key string) string {
return key + ".json"
}
func writeFile(codec runtime.Codec, path string, obj runtime.Object) error {
buf := new(bytes.Buffer)
if err := codec.Encode(obj, buf); err != nil {
return err
}
return os.WriteFile(path, buf.Bytes(), 0600)
}
func readFile(codec runtime.Codec, path string, newFunc func() runtime.Object) (runtime.Object, error) {
content, err := os.ReadFile(filepath.Clean(path))
if err != nil {
return nil, err
}
newObj := newFunc()
decodedObj, _, err := codec.Decode(content, nil, newObj)
if err != nil {
return nil, err
}
return decodedObj, nil
}
func readDirRecursive(codec runtime.Codec, path string, newFunc func() runtime.Object) ([]runtime.Object, error) {
var objs []runtime.Object
err := filepath.Walk(path, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.IsDir() || filepath.Ext(path) != ".json" {
return nil
}
obj, err := readFile(codec, path, newFunc)
if err != nil {
return err
}
objs = append(objs, obj)
return nil
})
if err != nil {
if errors.Is(err, os.ErrNotExist) {
return objs, nil
}
return nil, err
}
return objs, nil
}
func deleteFile(path string) error {
return os.Remove(path)
}
func exists(filepath string) bool {
_, err := os.Stat(filepath)
return err == nil
}
func ensureDir(dirname string) error {
if !exists(dirname) {
return os.MkdirAll(dirname, 0700)
}
return nil
}
func isUnchanged(codec runtime.Codec, obj runtime.Object, newObj runtime.Object) (bool, error) {
buf := new(bytes.Buffer)
if err := codec.Encode(obj, buf); err != nil {
return false, err
}
newBuf := new(bytes.Buffer)
if err := codec.Encode(newObj, newBuf); err != nil {
return false, err
}
return bytes.Equal(buf.Bytes(), newBuf.Bytes()), nil
}
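Comparing the encoded bytes, as `isUnchanged` does, sidesteps deep-equality on nested structs: if two objects serialize identically, the write is a no-op. The same idea with plain `encoding/json` (illustrative toy type, not the storage codec):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type playlist struct {
	Name  string   `json:"name"`
	Items []string `json:"items"`
}

// sameWhenEncoded reports whether two values serialize to identical bytes,
// which is how the storage layer decides to skip a no-op write.
func sameWhenEncoded(a, b any) (bool, error) {
	ab, err := json.Marshal(a)
	if err != nil {
		return false, err
	}
	bb, err := json.Marshal(b)
	if err != nil {
		return false, err
	}
	return bytes.Equal(ab, bb), nil
}

func main() {
	p1 := playlist{Name: "hi", Items: []string{"a"}}
	p2 := playlist{Name: "hi", Items: []string{"a"}}
	p3 := playlist{Name: "hi", Items: []string{"b"}}
	same, _ := sameWhenEncoded(p1, p2)
	diff, _ := sameWhenEncoded(p1, p3)
	fmt.Println(same, diff) // true false
}
```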


@@ -0,0 +1,103 @@
// SPDX-License-Identifier: AGPL-3.0-only
// Provenance-includes-location: https://github.com/tilt-dev/tilt-apiserver/blob/main/pkg/storage/filepath/watchset.go
// Provenance-includes-license: Apache-2.0
// Provenance-includes-copyright: The Kubernetes Authors.
package file
import (
"sync"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/apiserver/pkg/storage"
)
// Keeps track of which watches need to be notified
type WatchSet struct {
mu sync.RWMutex
nodes map[int]*watchNode
counter int
}
func NewWatchSet() *WatchSet {
return &WatchSet{
nodes: make(map[int]*watchNode, 20),
counter: 0,
}
}
// Creates a new watch with a unique id, but
// does not start sending events to it until Start() is called.
func (s *WatchSet) newWatch() *watchNode {
s.mu.Lock()
defer s.mu.Unlock()
s.counter++
return &watchNode{
id: s.counter,
s: s,
updateCh: make(chan watch.Event),
outCh: make(chan watch.Event),
}
}
func (s *WatchSet) cleanupWatchers() {
// Doesn't protect the below access on the nodes map since it won't ever be modified during cleanup
for _, w := range s.nodes {
w.Stop()
}
}
func (s *WatchSet) notifyWatchers(ev watch.Event) {
s.mu.RLock()
for _, w := range s.nodes {
w.updateCh <- ev
}
s.mu.RUnlock()
}
type watchNode struct {
s *WatchSet
id int
updateCh chan watch.Event
outCh chan watch.Event
}
// Start sending events to this watch.
func (w *watchNode) Start(p storage.SelectionPredicate, initEvents []watch.Event) {
w.s.mu.Lock()
w.s.nodes[w.id] = w
w.s.mu.Unlock()
go func() {
for _, e := range initEvents {
w.outCh <- e
}
for e := range w.updateCh {
ok, err := p.Matches(e.Object)
if err != nil {
continue
}
if !ok {
continue
}
w.outCh <- e
}
close(w.outCh)
}()
}
func (w *watchNode) Stop() {
w.s.mu.Lock()
delete(w.s.nodes, w.id)
w.s.mu.Unlock()
close(w.updateCh)
}
func (w *watchNode) ResultChan() <-chan watch.Event {
return w.outCh
}
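The `WatchSet`/`watchNode` pair is a small fan-out hub: every event is pushed to each registered node's channel under a read lock, and each node filters before forwarding to its consumer. A stripped-down, stdlib-only sketch of the same pattern (hypothetical `event` type; buffered channels stand in for the per-node goroutines):

```go
package main

import (
	"fmt"
	"sync"
)

type event struct{ name string }

type hub struct {
	mu   sync.RWMutex
	subs map[int]chan event
	next int
}

func newHub() *hub { return &hub{subs: make(map[int]chan event)} }

// subscribe registers a buffered channel; buffering keeps notify from
// blocking on slow consumers (the real implementation uses a goroutine
// per watch instead).
func (h *hub) subscribe() (int, <-chan event) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.next++
	ch := make(chan event, 8)
	h.subs[h.next] = ch
	return h.next, ch
}

func (h *hub) unsubscribe(id int) {
	h.mu.Lock()
	defer h.mu.Unlock()
	if ch, ok := h.subs[id]; ok {
		delete(h.subs, id)
		close(ch)
	}
}

// notify fans one event out to every registered subscriber.
func (h *hub) notify(ev event) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	for _, ch := range h.subs {
		ch <- ev
	}
}

func main() {
	h := newHub()
	id, ch := h.subscribe()
	h.notify(event{name: "added"})
	fmt.Println((<-ch).name) // added
	h.unsubscribe(id)
	_, open := <-ch
	fmt.Println(open) // false
}
```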


@@ -0,0 +1,35 @@
package utils
import (
clientrest "k8s.io/client-go/rest"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)
func FormatKubeConfig(restConfig *clientrest.Config) clientcmdapi.Config {
clusters := make(map[string]*clientcmdapi.Cluster)
clusters["default-cluster"] = &clientcmdapi.Cluster{
Server: restConfig.Host,
InsecureSkipTLSVerify: true,
}
contexts := make(map[string]*clientcmdapi.Context)
contexts["default-context"] = &clientcmdapi.Context{
Cluster: "default-cluster",
Namespace: "default",
AuthInfo: "default",
}
authinfos := make(map[string]*clientcmdapi.AuthInfo)
authinfos["default"] = &clientcmdapi.AuthInfo{
Token: restConfig.BearerToken,
}
return clientcmdapi.Config{
Kind: "Config",
APIVersion: "v1",
Clusters: clusters,
Contexts: contexts,
CurrentContext: "default-context",
AuthInfos: authinfos,
}
}


@@ -0,0 +1,266 @@
package utils
import (
"fmt"
"reflect"
"time"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// Annotation keys
const AnnoKeyCreatedBy = "grafana.app/createdBy"
const AnnoKeyUpdatedTimestamp = "grafana.app/updatedTimestamp"
const AnnoKeyUpdatedBy = "grafana.app/updatedBy"
const AnnoKeyFolder = "grafana.app/folder"
const AnnoKeySlug = "grafana.app/slug"
// Identify where values came from
const AnnoKeyOriginName = "grafana.app/originName"
const AnnoKeyOriginPath = "grafana.app/originPath"
const AnnoKeyOriginKey = "grafana.app/originKey"
const AnnoKeyOriginTimestamp = "grafana.app/originTimestamp"
// ResourceOriginInfo is saved in annotations. This is used to identify where the resource came from
// This object can model the same data as our existing provisioning table or a more general git sync
type ResourceOriginInfo struct {
// Name of the origin/provisioning source
Name string `json:"name,omitempty"`
// The path within the named origin above (external_id in the existing dashboard provisioning)
Path string `json:"path,omitempty"`
// Verification/identification key (check_sum in existing dashboard provisioning)
Key string `json:"key,omitempty"`
// Origin modification timestamp when the resource was saved
// This will be before the resource updated time
Timestamp *time.Time `json:"time,omitempty"`
// Avoid extending
_ any `json:"-"`
}
// Accessor functions for k8s objects
type GrafanaResourceMetaAccessor interface {
GetUpdatedTimestamp() (*time.Time, error)
SetUpdatedTimestamp(v *time.Time)
SetUpdatedTimestampMillis(unix int64)
GetCreatedBy() string
SetCreatedBy(user string)
GetUpdatedBy() string
SetUpdatedBy(user string)
GetFolder() string
SetFolder(uid string)
GetSlug() string
SetSlug(v string)
GetOriginInfo() (*ResourceOriginInfo, error)
SetOriginInfo(info *ResourceOriginInfo)
GetOriginName() string
GetOriginPath() string
GetOriginKey() string
GetOriginTimestamp() (*time.Time, error)
// Find a title in the object
// This will reflect the object and try to get:
// * spec.title
// * spec.name
// * title
// and return an empty string if nothing was found
FindTitle(defaultTitle string) string
}
var _ GrafanaResourceMetaAccessor = (*grafanaResourceMetaAccessor)(nil)
type grafanaResourceMetaAccessor struct {
raw interface{} // the original object (it implements metav1.Object)
obj metav1.Object
}
// MetaAccessor takes an arbitrary object pointer and returns a GrafanaResourceMetaAccessor.
// obj must be a pointer to an API type. An error is returned if the minimum
// required fields are missing. Fields that are not required return the default
// value and are a no-op if set.
func MetaAccessor(raw interface{}) (GrafanaResourceMetaAccessor, error) {
obj, err := meta.Accessor(raw)
if err != nil {
return nil, err
}
return &grafanaResourceMetaAccessor{raw, obj}, nil
}
func (m *grafanaResourceMetaAccessor) set(key string, val string) {
anno := m.obj.GetAnnotations()
if val == "" {
if anno != nil {
delete(anno, key)
}
} else {
if anno == nil {
anno = make(map[string]string)
}
anno[key] = val
}
m.obj.SetAnnotations(anno)
}
func (m *grafanaResourceMetaAccessor) get(key string) string {
return m.obj.GetAnnotations()[key]
}
func (m *grafanaResourceMetaAccessor) GetUpdatedTimestamp() (*time.Time, error) {
v, ok := m.obj.GetAnnotations()[AnnoKeyUpdatedTimestamp]
if !ok || v == "" {
return nil, nil
}
t, err := time.Parse(time.RFC3339, v)
if err != nil {
return nil, fmt.Errorf("invalid updated timestamp: %s", err.Error())
}
return &t, nil
}
func (m *grafanaResourceMetaAccessor) SetUpdatedTimestampMillis(v int64) {
if v > 0 {
t := time.UnixMilli(v)
m.SetUpdatedTimestamp(&t)
} else {
m.set(AnnoKeyUpdatedTimestamp, "") // will clear the annotation
}
}
func (m *grafanaResourceMetaAccessor) SetUpdatedTimestamp(v *time.Time) {
txt := ""
if v != nil && v.Unix() != 0 {
txt = v.UTC().Format(time.RFC3339)
}
m.set(AnnoKeyUpdatedTimestamp, txt)
}
func (m *grafanaResourceMetaAccessor) GetCreatedBy() string {
return m.get(AnnoKeyCreatedBy)
}
func (m *grafanaResourceMetaAccessor) SetCreatedBy(user string) {
m.set(AnnoKeyCreatedBy, user)
}
func (m *grafanaResourceMetaAccessor) GetUpdatedBy() string {
return m.get(AnnoKeyUpdatedBy)
}
func (m *grafanaResourceMetaAccessor) SetUpdatedBy(user string) {
m.set(AnnoKeyUpdatedBy, user)
}
func (m *grafanaResourceMetaAccessor) GetFolder() string {
return m.get(AnnoKeyFolder)
}
func (m *grafanaResourceMetaAccessor) SetFolder(uid string) {
m.set(AnnoKeyFolder, uid)
}
func (m *grafanaResourceMetaAccessor) GetSlug() string {
return m.get(AnnoKeySlug)
}
func (m *grafanaResourceMetaAccessor) SetSlug(v string) {
m.set(AnnoKeySlug, v)
}
func (m *grafanaResourceMetaAccessor) SetOriginInfo(info *ResourceOriginInfo) {
anno := m.obj.GetAnnotations()
if anno == nil {
if info == nil {
return
}
anno = make(map[string]string, 0)
}
delete(anno, AnnoKeyOriginName)
delete(anno, AnnoKeyOriginPath)
delete(anno, AnnoKeyOriginKey)
delete(anno, AnnoKeyOriginTimestamp)
if info != nil && info.Name != "" {
anno[AnnoKeyOriginName] = info.Name
if info.Path != "" {
anno[AnnoKeyOriginPath] = info.Path
}
if info.Key != "" {
anno[AnnoKeyOriginKey] = info.Key
}
if info.Timestamp != nil {
anno[AnnoKeyOriginTimestamp] = info.Timestamp.Format(time.RFC3339)
}
}
m.obj.SetAnnotations(anno)
}
func (m *grafanaResourceMetaAccessor) GetOriginInfo() (*ResourceOriginInfo, error) {
v, ok := m.obj.GetAnnotations()[AnnoKeyOriginName]
if !ok {
return nil, nil
}
t, err := m.GetOriginTimestamp()
return &ResourceOriginInfo{
Name: v,
Path: m.GetOriginPath(),
Key: m.GetOriginKey(),
Timestamp: t,
}, err
}
func (m *grafanaResourceMetaAccessor) GetOriginName() string {
return m.get(AnnoKeyOriginName)
}
func (m *grafanaResourceMetaAccessor) GetOriginPath() string {
return m.get(AnnoKeyOriginPath)
}
func (m *grafanaResourceMetaAccessor) GetOriginKey() string {
return m.get(AnnoKeyOriginKey)
}
func (m *grafanaResourceMetaAccessor) GetOriginTimestamp() (*time.Time, error) {
v, ok := m.obj.GetAnnotations()[AnnoKeyOriginTimestamp]
if !ok || v == "" {
return nil, nil
}
t, err := time.Parse(time.RFC3339, v)
if err != nil {
return nil, fmt.Errorf("invalid origin timestamp: %s", err.Error())
}
return &t, nil
}
func (m *grafanaResourceMetaAccessor) FindTitle(defaultTitle string) string {
// look for Spec.Title or Spec.Name
r := reflect.ValueOf(m.raw)
if r.Kind() == reflect.Ptr || r.Kind() == reflect.Interface {
r = r.Elem()
}
if r.Kind() == reflect.Struct {
spec := r.FieldByName("Spec")
if spec.Kind() == reflect.Struct {
title := spec.FieldByName("Title")
if title.IsValid() && title.Kind() == reflect.String {
return title.String()
}
name := spec.FieldByName("Name")
if name.IsValid() && name.Kind() == reflect.String {
return name.String()
}
}
title := r.FieldByName("Title")
if title.IsValid() && title.Kind() == reflect.String {
return title.String()
}
}
return defaultTitle
}
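FindTitle above is plain struct reflection with a fixed probe order: `Spec.Title`, then `Spec.Name`, then a top-level `Title`, then the default. A self-contained sketch of that probe order (toy types, not Grafana's resources):

```go
package main

import (
	"fmt"
	"reflect"
)

type withSpec struct {
	Spec struct{ Title string }
}

type topLevel struct{ Title string }

// findTitle walks the same field chain as the accessor: Spec.Title,
// Spec.Name, then Title, falling back to the provided default.
func findTitle(obj any, def string) string {
	r := reflect.ValueOf(obj)
	if r.Kind() == reflect.Ptr || r.Kind() == reflect.Interface {
		r = r.Elem()
	}
	if r.Kind() != reflect.Struct {
		return def
	}
	if spec := r.FieldByName("Spec"); spec.Kind() == reflect.Struct {
		if f := spec.FieldByName("Title"); f.IsValid() && f.Kind() == reflect.String {
			return f.String()
		}
		if f := spec.FieldByName("Name"); f.IsValid() && f.Kind() == reflect.String {
			return f.String()
		}
	}
	if f := r.FieldByName("Title"); f.IsValid() && f.Kind() == reflect.String {
		return f.String()
	}
	return def
}

func main() {
	a := withSpec{}
	a.Spec.Title = "HELLO"
	fmt.Println(findTitle(&a, "fallback"))                      // HELLO
	fmt.Println(findTitle(&topLevel{Title: "top"}, "fallback")) // top
	fmt.Println(findTitle(struct{ X int }{}, "fallback"))       // fallback
}
```

Note that, like the accessor, this returns the field value even when it is the empty string: only a missing field falls through to the default.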


@@ -0,0 +1,208 @@
package utils_test
import (
"testing"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/grafana/grafana/pkg/services/apiserver/utils"
)
type TestResource struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata
// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec Spec `json:"spec,omitempty"`
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TestResource) DeepCopyInto(out *TestResource) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Playlist.
func (in *TestResource) DeepCopy() *TestResource {
if in == nil {
return nil
}
out := new(TestResource)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *TestResource) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// Spec defines model for Spec.
type Spec struct {
// Name of the object.
Title string `json:"title"`
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Spec) DeepCopyInto(out *Spec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Spec.
func (in *Spec) DeepCopy() *Spec {
if in == nil {
return nil
}
out := new(Spec)
in.DeepCopyInto(out)
return out
}
type TestResource2 struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata
// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec Spec2 `json:"spec,omitempty"`
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TestResource2) DeepCopyInto(out *TestResource2) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Playlist.
func (in *TestResource2) DeepCopy() *TestResource2 {
if in == nil {
return nil
}
out := new(TestResource2)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *TestResource2) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// Spec defines model for Spec.
type Spec2 struct{}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Spec2) DeepCopyInto(out *Spec2) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Spec.
func (in *Spec2) DeepCopy() *Spec2 {
if in == nil {
return nil
}
out := new(Spec2)
in.DeepCopyInto(out)
return out
}
func TestMetaAccessor(t *testing.T) {
originInfo := &utils.ResourceOriginInfo{
Name: "test",
Path: "a/b/c",
Key: "kkk",
}
t.Run("fails for non resource objects", func(t *testing.T) {
_, err := utils.MetaAccessor("hello")
require.Error(t, err)
_, err = utils.MetaAccessor(unstructured.Unstructured{})
require.Error(t, err) // Not a pointer!
_, err = utils.MetaAccessor(&unstructured.Unstructured{})
require.NoError(t, err) // a pointer works
_, err = utils.MetaAccessor(&TestResource{
Spec: Spec{
Title: "HELLO",
},
})
require.NoError(t, err) // a pointer works
})
t.Run("get and set grafana metadata", func(t *testing.T) {
res := &unstructured.Unstructured{}
meta, err := utils.MetaAccessor(res)
require.NoError(t, err)
meta.SetOriginInfo(originInfo)
meta.SetFolder("folderUID")
require.Equal(t, map[string]string{
"grafana.app/originName": "test",
"grafana.app/originPath": "a/b/c",
"grafana.app/originKey": "kkk",
"grafana.app/folder": "folderUID",
}, res.GetAnnotations())
})
t.Run("find titles", func(t *testing.T) {
// with a k8s object that has Spec.Title
obj := &TestResource{
Spec: Spec{
Title: "HELLO",
},
}
meta, err := utils.MetaAccessor(obj)
require.NoError(t, err)
meta.SetOriginInfo(originInfo)
meta.SetFolder("folderUID")
require.Equal(t, map[string]string{
"grafana.app/originName": "test",
"grafana.app/originPath": "a/b/c",
"grafana.app/originKey": "kkk",
"grafana.app/folder": "folderUID",
}, obj.GetAnnotations())
require.Equal(t, "HELLO", obj.Spec.Title)
require.Equal(t, "HELLO", meta.FindTitle(""))
obj.Spec.Title = ""
require.Equal(t, "", meta.FindTitle("xxx"))
// with a k8s object without Spec.Title
obj2 := &TestResource2{}
meta, err = utils.MetaAccessor(obj2)
require.NoError(t, err)
meta.SetOriginInfo(originInfo)
meta.SetFolder("folderUID")
require.Equal(t, map[string]string{
"grafana.app/originName": "test",
"grafana.app/originPath": "a/b/c",
"grafana.app/originKey": "kkk",
"grafana.app/folder": "folderUID",
}, obj2.GetAnnotations())
require.Equal(t, "xxx", meta.FindTitle("xxx"))
})
}


@@ -0,0 +1,143 @@
package utils
import (
"context"
"fmt"
"net/http"
"reflect"
"time"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/registry/rest"
)
// Based on https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/registry/rest/table.go
type customTableConvertor struct {
gr schema.GroupResource
columns []metav1.TableColumnDefinition
reader func(obj any) ([]interface{}, error)
}
func NewTableConverter(gr schema.GroupResource, columns []metav1.TableColumnDefinition, reader func(obj any) ([]interface{}, error)) rest.TableConvertor {
converter := customTableConvertor{
gr: gr,
columns: columns,
reader: reader,
}
// Replace the description on standard columns with the global values
for idx, column := range converter.columns {
if column.Description == "" {
switch column.Name {
case "Name":
converter.columns[idx].Description = swaggerMetadataDescriptions["name"]
case "Created At":
converter.columns[idx].Description = swaggerMetadataDescriptions["creationTimestamp"]
}
}
}
return converter
}
func NewDefaultTableConverter(gr schema.GroupResource) rest.TableConvertor {
return NewTableConverter(gr,
[]metav1.TableColumnDefinition{
{Name: "Name", Type: "string", Format: "name"},
{Name: "Created At", Type: "date"},
},
func(obj any) ([]interface{}, error) {
v, err := meta.Accessor(obj)
if err == nil && v != nil {
return []interface{}{
v.GetName(),
v.GetCreationTimestamp().UTC().Format(time.RFC3339),
}, nil
}
r := reflect.ValueOf(obj).Elem()
n := r.FieldByName("Name").String()
if n != "" {
return []interface{}{
n,
"",
}, nil
}
return []interface{}{
fmt.Sprintf("%v", obj),
"",
}, nil
},
)
}
var _ rest.TableConvertor = &customTableConvertor{}
var swaggerMetadataDescriptions = metav1.ObjectMeta{}.SwaggerDoc()
func (c customTableConvertor) ConvertToTable(ctx context.Context, object runtime.Object, tableOptions runtime.Object) (*metav1.Table, error) {
table, ok := object.(*metav1.Table)
if ok {
return table, nil
}
table = &metav1.Table{}
fn := func(obj runtime.Object) error {
cells, err := c.reader(obj)
if err != nil {
resource := c.gr
if info, ok := request.RequestInfoFrom(ctx); ok {
resource = schema.GroupResource{Group: info.APIGroup, Resource: info.Resource}
}
return errNotAcceptable{resource: resource}
}
table.Rows = append(table.Rows, metav1.TableRow{
Cells: cells,
Object: runtime.RawExtension{Object: obj},
})
return nil
}
switch {
case meta.IsListType(object):
if err := meta.EachListItem(object, fn); err != nil {
return nil, err
}
default:
if err := fn(object); err != nil {
return nil, err
}
}
if m, err := meta.ListAccessor(object); err == nil {
table.ResourceVersion = m.GetResourceVersion()
table.Continue = m.GetContinue()
table.RemainingItemCount = m.GetRemainingItemCount()
} else {
if m, err := meta.CommonAccessor(object); err == nil {
table.ResourceVersion = m.GetResourceVersion()
}
}
if opt, ok := tableOptions.(*metav1.TableOptions); !ok || !opt.NoHeaders {
table.ColumnDefinitions = c.columns
}
return table, nil
}
// errNotAcceptable indicates the resource doesn't support Table conversion
type errNotAcceptable struct {
resource schema.GroupResource
}
func (e errNotAcceptable) Error() string {
return fmt.Sprintf("the resource %s does not support being converted to a Table", e.resource)
}
func (e errNotAcceptable) Status() metav1.Status {
return metav1.Status{
Status: metav1.StatusFailure,
Code: http.StatusNotAcceptable,
Reason: metav1.StatusReason("NotAcceptable"),
Message: e.Error(),
}
}


@@ -0,0 +1,124 @@
package utils_test
import (
"context"
"encoding/json"
"fmt"
"testing"
"time"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/grafana/grafana/pkg/services/apiserver/utils"
)
func TestTableConverter(t *testing.T) {
// dummy converter
converter := utils.NewTableConverter(
schema.GroupResource{Group: "x", Resource: "y"},
[]metav1.TableColumnDefinition{
{Name: "Name", Type: "string", Format: "name"},
{Name: "Dummy", Type: "string", Format: "string", Description: "Something here"},
{Name: "Created At", Type: "date"},
},
func(obj any) ([]interface{}, error) {
m, ok := obj.(*metav1.APIGroup)
if !ok {
return nil, fmt.Errorf("expected APIGroup")
}
ts := metav1.NewTime(time.UnixMilli(10000000))
return []interface{}{
m.Name,
"dummy",
ts.Time.UTC().Format(time.RFC3339),
}, nil
},
)
// Convert a single table
table, err := converter.ConvertToTable(context.Background(), &metav1.APIGroup{
Name: "hello",
}, nil)
require.NoError(t, err)
out, err := json.MarshalIndent(table, "", " ")
require.NoError(t, err)
//fmt.Printf("%s", string(out))
require.JSONEq(t, `{
"metadata": {},
"columnDefinitions": [
{
"name": "Name",
"type": "string",
"format": "name",
"description": "Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names",
"priority": 0
},
{
"name": "Dummy",
"type": "string",
"format": "string",
"description": "Something here",
"priority": 0
},
{
"name": "Created At",
"type": "date",
"format": "",
"description": "CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"priority": 0
}
],
"rows": [
{
"cells": [
"hello",
"dummy",
"1970-01-01T02:46:40Z"
],
"object": {
"name": "hello",
"versions": null,
"preferredVersion": {
"groupVersion": "",
"version": ""
}
}
}
]
}`, string(out))
// Convert something else
table, err = converter.ConvertToTable(context.Background(), &metav1.Status{}, nil)
require.Error(t, err)
require.Nil(t, table)
require.Equal(t, "the resource y.x does not support being converted to a Table", err.Error())
// Default table converter
// Convert a single table
converter = utils.NewDefaultTableConverter(schema.GroupResource{Group: "x", Resource: "y"})
table, err = converter.ConvertToTable(context.Background(), &metav1.APIGroup{
Name: "hello",
}, nil)
require.NoError(t, err)
out, err = json.MarshalIndent(table.Rows, "", " ")
require.NoError(t, err)
//fmt.Printf("%s", string(out))
require.JSONEq(t, `[
{
"cells": [
"hello",
""
],
"object": {
"name": "hello",
"versions": null,
"preferredVersion": {
"groupVersion": "",
"version": ""
}
}
}
]`, string(out))
}


@@ -0,0 +1,37 @@
package utils

import (
	"crypto/sha256"
	"encoding/base64"
	"strings"
	"unicode"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
)
// CalculateClusterWideUID creates a stable UID that will be unique across a multi-tenant cluster.
// This is useful while we migrate from SQL storage to a backend where the UID (GUID)
// is actually baked into the storage engine itself.
func CalculateClusterWideUID(obj runtime.Object) types.UID {
	gvk := obj.GetObjectKind().GroupVersionKind()
	hasher := sha256.New()
	hasher.Write([]byte(gvk.Group))
	hasher.Write([]byte("|"))
	hasher.Write([]byte(gvk.Kind))
	hasher.Write([]byte("|"))
	accessor, err := meta.Accessor(obj)
	if err == nil {
		hasher.Write([]byte(accessor.GetNamespace()))
		hasher.Write([]byte("|"))
		hasher.Write([]byte(accessor.GetName()))
	}
	v := base64.URLEncoding.EncodeToString(hasher.Sum(nil))
	// Map any rune that is not a letter or digit to 'X' so the result is a
	// plain alphanumeric UID string.
	return types.UID(strings.Map(func(r rune) rune {
		if !(unicode.IsLetter(r) || unicode.IsDigit(r)) {
			return 'X'
		}
		return r
	}, v))
}


@@ -0,0 +1,15 @@
package apiserver

import (
	"github.com/google/wire"

	"github.com/grafana/grafana/pkg/services/apiserver/builder"
)

// WireSet registers the apiserver service and binds the interfaces that
// *service implements for dependency injection.
var WireSet = wire.NewSet(
	ProvideService,
	wire.Bind(new(RestConfigProvider), new(*service)),
	wire.Bind(new(Service), new(*service)),
	wire.Bind(new(DirectRestConfigProvider), new(*service)),
	wire.Bind(new(builder.APIRegistrar), new(*service)),
)