// core: New refresh graph building behaviour
//
// Previously, the refresh graph used the resources from state as a base, with
// data sources then layered on. Config was not consulted for resources, so new
// resources added with count (or any new resource from config, for that
// matter) did not get added to the graph during refresh.
//
// This led to issues with scale-in and scale-out when the same value for count
// was used in both resources and data sources that depend on that resource
// (and possibly vice versa). While the resources existed in config and could
// be used, the missing ConfigTransformer for resources meant they never got
// added to the graph, leading to "index out of range" errors.
//
// Further, adding these new resources to the graph for scale-out requires
// handling scale-in as well, which the previous implementation of
// NodeRefreshableDataResource did not fully catch. Scaled-in resources should
// be treated as orphans which, following the instance-form
// NodeRefreshableResource node, should become NodeDestroyableDataResource
// nodes; but this logic was not rolled into NodeRefreshableDataResource,
// causing race-like "index out of range" errors on scale-in.
//
// The refresh graph therefore no longer uses StateTransformer as its base.
// Instead, resources are added from state and config in a hybrid fashion:
//
//   * First, resource nodes are added from config, but only if resources
//     currently exist in state. NodeRefreshableManagedResource is a new
//     expandable resource node that expands count and adds orphans from
//     state. Any count-expanded node that has config but no state is
//     transformed into a plannable resource via a new
//     ResourceRefreshPlannableTransformer.
//   * NodeRefreshableDataResource now adds count orphans as
//     NodeDestroyableDataResource nodes. This achieves the same effect as if
//     the data sources were added by StateTransformer, but ensures there are
//     no races in the dependency chain, with the added benefit of directing
//     these nodes straight to the proper NodeDestroyableDataResource node.
//   * Finally, config orphans (nodes that no longer exist in config at all)
//     are added to complete the graph.
//
// This ensures, as much as possible, a refresh graph that best represents
// both the current state and the config with updated variables and counts.

package terraform

import (
	"log"

	"github.com/hashicorp/terraform/config"
	"github.com/hashicorp/terraform/config/module"
	"github.com/hashicorp/terraform/dag"
)

// RefreshGraphBuilder implements GraphBuilder and is responsible for building
// a graph for refreshing (updating the Terraform state).
//
// The primary differences between this graph and others:
//
//   * Based on the state since it represents the only resources that
//     need to be refreshed.
//
//   * Ignores lifecycle options since no lifecycle events occur here. This
//     simplifies the graph significantly since complex transforms such as
//     create-before-destroy can be completely ignored.
type RefreshGraphBuilder struct {
	// Module is the root module for the graph to build.
	Module *module.Tree

	// State is the current state.
	State *State

	// Providers is the list of providers supported.
	Providers []string

	// Targets are the resources to target.
	Targets []string

	// DisableReduce, if true, will not reduce the graph. Great for testing.
	DisableReduce bool

	// Validate will do structural validation of the graph.
	Validate bool
}

// Build implements GraphBuilder.
func (b *RefreshGraphBuilder) Build(path []string) (*Graph, error) {
	return (&BasicGraphBuilder{
		Steps:    b.Steps(),
		Validate: b.Validate,
		Name:     "RefreshGraphBuilder",
	}).Build(path)
}

// Steps implements GraphBuilder, returning the ordered transformers that
// build the refresh graph.
func (b *RefreshGraphBuilder) Steps() []GraphTransformer {
	// Custom factory for creating providers.
	concreteProvider := func(a *NodeAbstractProvider) dag.Vertex {
		return &NodeApplyableProvider{
			NodeAbstractProvider: a,
		}
	}

	concreteManagedResource := func(a *NodeAbstractResource) dag.Vertex {
		return &NodeRefreshableManagedResource{
			NodeAbstractCountResource: &NodeAbstractCountResource{
				NodeAbstractResource: a,
			},
		}
	}

	concreteManagedResourceInstance := func(a *NodeAbstractResource) dag.Vertex {
		return &NodeRefreshableManagedResourceInstance{
			NodeAbstractResource: a,
		}
	}
	concreteDataResource := func(a *NodeAbstractResource) dag.Vertex {
		return &NodeRefreshableDataResource{
			NodeAbstractCountResource: &NodeAbstractCountResource{
				NodeAbstractResource: a,
			},
		}
	}

	steps := []GraphTransformer{
		// Creates the managed resource nodes from config, but only if we
		// have a state already. No resources in state means there's nothing
		// to refresh.
		func() GraphTransformer {
			if b.State.HasResources() {
				return &ConfigTransformer{
					Concrete:   concreteManagedResource,
					Module:     b.Module,
					Unique:     true,
					ModeFilter: true,
					Mode:       config.ManagedResourceMode,
				}
			}
			log.Println("[TRACE] No managed resources in state during refresh; skipping managed resource transformer")
			return nil
		}(),

		// Creates all the data resources that aren't in the state. This will
		// also add any orphans from scaling in as destroy nodes.
		&ConfigTransformer{
			Concrete:   concreteDataResource,
			Module:     b.Module,
			Unique:     true,
			ModeFilter: true,
			Mode:       config.DataResourceMode,
		},

		// Add any fully-orphaned resources from config (ones that have been
		// removed completely, not ones that are just orphaned due to a
		// scaled-in count).
		&OrphanResourceTransformer{
			Concrete: concreteManagedResourceInstance,
			State:    b.State,
			Module:   b.Module,
		},

		// Attach the state
		&AttachStateTransformer{State: b.State},

		// Attach the configuration to any resources
		&AttachResourceConfigTransformer{Module: b.Module},

		// Add root variables
		&RootVariableTransformer{Module: b.Module},

		// Add the providers
		TransformProviders(b.Providers, concreteProvider, b.Module),

		// Add the local values
		&LocalTransformer{Module: b.Module},

		// Add the outputs
		&OutputTransformer{Module: b.Module},

		// Add module variables
		&ModuleVariableTransformer{Module: b.Module},

		// Connect the references so they are ready for targeting. We'll
		// have to connect again later for providers and so on.
		&ReferenceTransformer{},

		// Target. Target addresses are matched hierarchically, so targeting
		// a module (e.g. module.foo) also selects the resources in its
		// descendant modules (e.g. module.foo.module.bar.aws_instance.baz).
		&TargetsTransformer{
			Targets: b.Targets,

			// Resource nodes from config have not yet been expanded for
			// "count", so we must apply targeting without indices. Exact
			// targeting will be dealt with later when these resources
			// DynamicExpand.
			IgnoreIndices: true,
		},

		// Close opened plugin connections
		&CloseProviderTransformer{},

		// Single root
		&RootTransformer{},
	}

	if !b.DisableReduce {
		// Perform the transitive reduction to make our graph a bit
		// more sane if possible (it usually is possible).
		steps = append(steps, &TransitiveReductionTransformer{})
	}

	return steps
}