Feature Requests

Got an idea for a feature request? Let us know! Share your ideas on improving existing features or suggest something new. Vote on ideas you find useful!

Make sure to read our guidelines before posting 📖

Allow direct access through terraform_remote_state data blocks to other stacks' TF states regardless of the space in which they are located

Right now, the spaces + access control features also extend to the TF states: for a source stack to read the TF state of a target stack, the source stack must be administrative, the target stack must have external state access enabled, and additionally (this is the main issue/blocker for us) the target stack has to be located in the same space or a child space, as indicated here (point 3): https://support.spacelift.io/articles/9582913392-resolving-cannot-create-a-workspace-error-when-accessing-external-state

This is preventing us from adopting the new spaces approach, for the following reasons.

Spaces are mainly intended for access control and for grouping/constraining which resources can be used by or attached to other resources (e.g. AWS integrations, policies, etc.). In all of these cases, inheritance works so that each child space inherits the permissions/resources from its parent spaces. For example, a group that has write access in a parent space also inherits write access in the child space, and in the same way, an AWS integration attached to a parent space can be attached to a stack that belongs to a child space. So far so good.

However, reading remote states from other stacks works in exactly the opposite direction: based on the link shared above, a stack can only access the remote state of another stack that is in the same space or in a child space (we have verified this is indeed how it behaves). This basically renders the inheritance concept useless in many use cases, since it is common for stacks in child spaces (where teams are granted broader permissions) to need to read remote states from stacks in parent spaces (managing base/centralized resources), which are usually more sensitive and therefore have more restricted access. This cannot be achieved today, since the child stacks (with wider access) would have to be located in parent spaces in order to get remote state access to the parent (more sensitive) stacks.

Administrative stacks (meaning stacks that manage other Spacelift resources) need to be placed in the parent spaces so that they can manage Spacelift resources belonging to lower spaces, and that is completely fine. The pain point is that reading remote states from other stacks, which has nothing to do with managing Spacelift resources, uses the same approach. Organization and relations in the Spacelift context do not necessarily have to mirror the relations between the resources managed by all the stacks spread across the spaces structure; in many cases, such as product companies, the resources managed across those stacks/spaces can perfectly well belong to a single platform or stack.
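For reference, this is the kind of cross-stack read the request is about: a minimal sketch using the remote backend pattern the linked support article describes for external state access. The account name and target stack slug are placeholders.

```hcl
# Source stack reading the target stack's Terraform state.
# Today this only works when the target sits in the same space
# as the source stack, or in a child space of it.
data "terraform_remote_state" "core" {
  backend = "remote"

  config = {
    hostname     = "spacelift.io"
    organization = "my-account"   # placeholder: Spacelift account name
    workspaces = {
      name = "core-networking"    # placeholder: target stack slug
    }
  }
}

# Example consumption of an output exposed by the target stack.
locals {
  vpc_id = data.terraform_remote_state.core.outputs.vpc_id
}
```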

💡 Feature Requests

4 days ago

Super administrative stacks

Currently a stack can be administrative, which means it can create resources for its own space and child spaces. However, there is no way to create resources in parent or sibling spaces. Among other possibilities, this would be useful for creating contexts in other spaces (parents/siblings) programmatically, while protecting some sensitive contexts from inheritance.

Use case: the 'root' space has two children ('users' and 'admins'). 'root' contains resources shared by both of them (shared credentials, TF modules, admin stacks, etc.). The 'admins' space contains a context with org-admin credentials, which can NOT be shared with the 'users' space for security reasons. Since both children inherit from 'root', the org-admin credentials context can NOT live in 'root'. A stack in the 'admins' space then uses the org-admin credentials to create projects with scoped credentials for each of them, and those new credentials are safe to share with and use from the 'users' space. To do so, after creating the new credentials, the stack with "super administrative" privileges creates the corresponding contexts in the 'users' space for the stacks there to consume them, as sketched below. Alternatively, those contexts could also be created in the 'root' space so they would be shared with everyone.
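For illustration, a sketch (via the spacelift Terraform provider) of what the "super administrative" stack in the 'admins' space would apply. The provider's spacelift_context resource already accepts a space_id, so the syntax exists; what is missing is the permission for an administrative stack to target a sibling space. All IDs, names, and variables here are placeholders.

```hcl
# Context created in the sibling 'users' space -- the operation this
# feature request would allow an administrative stack to perform.
resource "spacelift_context" "users_scoped_creds" {
  name     = "scoped-credentials"
  space_id = "users-01ABCDE" # placeholder: the sibling space ID
}

# The freshly created, narrowly scoped credential, safe for 'users'.
resource "spacelift_environment_variable" "scoped_token" {
  context_id = spacelift_context.users_scoped_creds.id
  name       = "SCOPED_API_TOKEN"
  value      = var.scoped_api_token # produced by the admin stack's run
  write_only = true                 # never readable back through the API
}
```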

💡 Feature Requests

19 days ago

1

Support for configuring a Spacelift Stack to "track" a git tag rather than a branch

We have a use case where we want to deploy some infra into multiple environments from a repository that uses git tags to semantically version that infra. By configuring a Stack to point at a git tag rather than a branch, we could align the way we use Spacelift for this use case with our existing DevOps process. Our use case is a little different from Spacelift's typical one, where a branch is tracked for infra changes: we are not planning changes to this infrastructure, we just want to be able to create multiple Stacks that deploy it into different environments. If (when) we need to change this infrastructure, we will reconfigure the Stacks to point at a newer git tag.
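A hypothetical sketch of how tag tracking might surface in the spacelift Terraform provider; git_tag is an imagined argument (only branch exists today), and the names are placeholders.

```hcl
resource "spacelift_stack" "networking_prod" {
  name         = "networking-prod"
  repository   = "infra-modules"
  project_root = "envs/prod"

  branch  = "main"     # what the provider supports today
  # git_tag = "v1.4.2" # hypothetical: the requested tag-tracking mode;
                       # bumping the tag here would be the whole release step
}
```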

💡 Feature Requests

About 1 month ago

2

⬆️ Gathering votes

Improved Visibility into Worker Billing (P95 Model)

We'd like to request improvements to billing visibility, specifically around how worker usage is calculated and billed under the P95 model.

Background: the P95 billing model is currently a bit tricky to fully grasp from the UI, especially when trying to estimate how much we'll be billed or whether we're within our buffer. We asked a few questions and got helpful answers. Example: if we occasionally exceed the limit (e.g. 6 workers for a short period), the P95 might still be within our contracted limit (e.g. 5 workers), meaning no extra charges. However, going far over (e.g. 30 workers for the same short time) could push the P95 up and cause overages. So while both the duration and the degree of an overage matter, it's currently hard to know what our actual risk or buffer is.

Feature request: better visibility in the UI and/or via metrics endpoints around this billing model, specifically:

- Current P95 usage: where we stand right now in the billing cycle.
- Projected billing impact: a clear estimate of our current or projected overages, if any.
- Buffer insights: an answer to the question "How much buffer do we have left this month?"
- Simulation tool: something that helps us simulate the impact of temporarily increasing the worker count (e.g. during migrations between pools) so we can make informed decisions.

Why it matters: this would help us plan operations like migrations without unintentional overages, understand when it's safe to scale up temporarily, and avoid surprises in billing. Thanks for your consideration!
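For concreteness, here is the standard percentile definition this model appears to follow, plus the arithmetic behind the "short burst" examples above. Minute-level sampling over a 30-day cycle is an assumption; the post does not say what interval Spacelift actually samples at.

$$ P_{95} = \min\bigl\{\, v \;:\; \#\{\, t : w_t \le v \,\} \ge 0.95\,N \,\bigr\} $$

where $w_t$ is the worker count in sample $t$ and $N$ is the number of samples in the billing cycle. With minute samples, $N = 30 \times 24 \times 60 = 43{,}200$, so up to $0.05\,N = 2{,}160$ samples (about 36 hours) may exceed the contracted count without moving $P_{95}$ at all. Once the total over-limit time crosses that allowance, the billed value jumps to whatever the usage level is at the 95th percentile, which is when the degree of the burst (6 vs. 30 workers) starts to matter.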

💡 Feature Requests

27 days ago

⬆️ Gathering votes

Expose Run Promotion Flag in Payload

It would be great if you could include a Boolean flag for promoted runs in the run portion of the policy input payload. This would be useful for crafting plan and approval policies. We allow run promotion in environments beyond DEV, but we require special approval for QA and STAGE when a change is not being deployed through the normal PR process. We can typically detect a run promotion by comparing the stack's branch with the run's branch, but that's not always possible, since setting the commit SHA with spacectl or other methods sometimes puts "-" as the branch. It would be much simpler to check that promoted_run: true and that the stack has a qa or stage label in order to require an approval, as sketched below.

Run Promotion - Spacelift Documentation
Approval policy - Spacelift Documentation
Plan policy - Spacelift Documentation
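A sketch of the approval-policy rule this flag would enable, written in Rego since that is the language Spacelift policies use. input.run.promoted is the hypothetical flag being requested (it does not exist in today's payload); the qa/stage labels are our own convention, and the input.reviews.current.approvals shape is assumed from the documented approval-policy input.

```rego
package spacelift

gated_envs := {"qa", "stage"}

# True when a promoted run targets a stack labeled "qa" or "stage".
# input.run.promoted is hypothetical -- the flag this request asks for.
promoted_to_gated_env {
  input.run.promoted
  gated_envs[input.stack.labels[_]]
}

# Non-promoted runs (or promoted runs to ungated stacks) pass through.
approve { not promoted_to_gated_env }

# Promoted runs into QA/STAGE need at least one human approval.
approve {
  promoted_to_gated_env
  count(input.reviews.current.approvals) > 0
}
```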

💡 Feature Requests

About 1 month ago