Allow direct access through terraform_remote_state data blocks to other stacks' TF states regardless of the space in which they are located
Right now, spaces and access control features also extend to TF states: for one source stack to read the TF state of another target stack, the source stack must be administrative, the target stack must have external state access enabled, and additionally (this is the main issue/blocker for us) the target stack has to be located in the same or a child space, as indicated here (point 3): https://support.spacelift.io/articles/9582913392-resolving-cannot-create-a-workspace-error-when-accessing-external-state

This is preventing us from adopting the new spaces approach, for the following reasons. Spaces are mainly intended for access control and for grouping/constraining which resources can be used by or attached to other resources (e.g. AWS integrations, policies, etc.). In all these cases, inheritance works so that each child space inherits the permissions/resources/etc. from its parent spaces. For example, a group that has write access in a parent space also inherits write access in the child space, and in the same way, an AWS integration attached to a parent space can be attached to a stack that belongs to a child space. So far so good.

However, reading remote states from other stacks works in just the opposite way: based on the link shared above, a stack can only access the remote state of another stack that is in the same space or in a child space, and we have verified that this is indeed how it works. This basically makes the inheritance concept useless in many use cases, since it is common that stacks in child spaces (with higher permissions granted to the teams) need to read remote states from parent stacks (managing base/centralized resources), which are usually more sensitive and therefore have more restricted access. That cannot be achieved here, because the child stacks (with wider access) would have to be located in parent spaces in order to get remote state access to the parent (more sensitive) stacks.

Administrative stacks (meaning stacks that manage other Spacelift resources) need to be placed in parent spaces so that they are able to manage Spacelift resources belonging to lower spaces, and this is completely fine. The pain point is that reading remote states from other stacks, which has nothing to do with managing Spacelift resources, uses the same approach. The organization and relations in the Spacelift context do not necessarily have to mirror the relations between the resources managed by all the existing stacks spread across the spaces structure, since in many cases, such as product companies, the resources managed across those stacks/spaces can perfectly well belong to a single platform or stack.
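For reference, a minimal sketch of how a source stack typically consumes another stack's Spacelift-managed state today, assuming the documented external state access pattern; the account name "acme", the stack ID "core-networking", and the "vpc_id" output are placeholders, not taken from the request:

```hcl
# Read the state of another Spacelift-managed stack. Today this only works if the
# source stack is administrative, the target stack has external state access
# enabled, and the target sits in the same space or a child space.
data "terraform_remote_state" "core" {
  backend = "remote"

  config = {
    hostname     = "spacelift.io"
    organization = "acme"            # placeholder Spacelift account name

    workspaces = {
      name = "core-networking"       # placeholder target stack ID
    }
  }
}

# Example consumption of an output exposed by the target stack.
output "core_vpc_id" {
  value = data.terraform_remote_state.core.outputs.vpc_id
}
```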
💡 Feature Requests
4 days ago
Access Control
Allow registering a module with the Spacelift registry without Terraform in the root dir
If you try to register a module from a GitHub repo that doesn't have Terraform code in the root directory, you get an error.
💡 Feature Requests
16 days ago
Terraform registry
Packer build integration
I would like to be able to run Packer builds via Spacelift. More specifically, I am trying to create an EBS snapshot containing Docker images as a sort of image cache. These snapshots will then be attached to an EKS node when the node is created.
💡 Feature Requests
19 days ago
Super administrative stacks
Currently a stack can be administrative, which means it can create resources in its own space and child spaces. However, there's no way to create resources in parent or sibling spaces. Among other possibilities, this is useful for creating contexts in other spaces (parents/siblings) programmatically, while protecting some sensitive contexts from inheritance.

Use case: the 'root' space has two children ('users' and 'admins'). 'root' contains resources shared by both of them (shared credentials, TF modules, admin stacks, etc.). The 'admins' space contains a context with org-admin credentials, which can NOT be shared with the 'users' space for security reasons. Since both children inherit from 'root', the org-admin credentials context can NOT live in 'root'. A stack in the 'admins' space (using the org-admin credentials) then creates projects with scoped credentials for each of them, and these new credentials are safe to share with and use from the 'users' space. To do so, after creating the new credentials, the stack with "super administrative" privileges creates contexts in the 'users' space for the stacks there to consume. Alternatively, those contexts could also be created in the `root` space so they would be shared with everyone.
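A rough sketch of what the admin stack could do with the Spacelift Terraform provider if it were allowed to target a sibling space; the space ID "users-space-id", the context/variable names, and var.generated_project_token are placeholders:

```hcl
# Sketch only: today an administrative stack can create contexts in its own space
# or child spaces; the request is to also allow a sibling space such as 'users'.
resource "spacelift_context" "scoped_credentials" {
  name        = "project-scoped-credentials"
  description = "Scoped credentials generated by the admin stack"
  space_id    = "users-space-id" # placeholder ID of the sibling 'users' space
}

resource "spacelift_environment_variable" "token" {
  context_id = spacelift_context.scoped_credentials.id
  name       = "PROJECT_API_TOKEN"
  value      = var.generated_project_token # hypothetical value produced earlier by the admin stack
  write_only = true                         # keep the secret unreadable after creation
}
```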
💡 Feature Requests
19 days ago
Stack Dependencies
Context "Used by" UI text "manually attached" is misleading
When checking a context's “Used by” tab in the UI there are two sections: “Manually attached” and “Auto-attached”. Since not every context that isn't auto-attached was attached manually (it could have been attached via Terraform), the text is misleading. My suggestion is to change the “Manually attached” text to something like “Explicitly attached”.
💡 Feature Requests
9 days ago
Context "Used by" UI text "manually attached" is misleading
When checking a context “Used by” in the UI there are two sections: “Manually attached“ and “Auto-attached“. Since not all the not auto-attached contexts are manually attached, because they could be attached by terraform, the text is misleading. My suggestion is to change the “Manually attached“ text to something like “Explicitly attached“
💡 Feature Requests
9 days ago
Split before_$ / after_$ logs from $ step
We have a few (very verbose) scripts which run before / after certain steps (init, plan, apply). Because of the verbosity we often run into Spacelift's log limits. This makes it harder to troubleshoot issues, either for humans (because we need to download the logs, etc.) or for AI (because it gets a whole bunch of generally irrelevant logs). If we could split the logs for these custom steps into separate overviews, they would be much easier to work with.
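For context, a minimal sketch of how such hooks are typically attached via the Spacelift Terraform provider; stack, repository, and script names are placeholders. Today the hooks' output is interleaved with the main step's log stream:

```hcl
# Placeholder hook scripts attached around the planning phase; their (often
# verbose) output currently lands in the same log view as `terraform plan`.
resource "spacelift_stack" "app" {
  name       = "app"
  repository = "infra"
  branch     = "main"

  before_plan = ["./scripts/preflight-checks.sh"]  # placeholder script
  after_plan  = ["./scripts/post-plan-report.sh"]  # placeholder script
}
```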
💡 Feature Requests
9 days ago
UI/UX
⬆️ Gathering votes
Ability for stacks to check ServiceNow for an approved change before deployment
As we're deploying this to our users, we've come across a use case where it would be really helpful if we could restrict certain stacks (production), via labels, to require input of a change request that Spacelift can check against ServiceNow (or any other compatible change management tool), to make sure the change is approved and allowed to move forward before the stack applies.
💡 Feature Requests
11 days ago
Integrations
⬆️ Gathering votes
Optionally hide features of the tool
We would like the ability to hide certain areas of the tool from users. For instance, cloud integrations require Graph permissions to create the app registration in Azure, and most users don't have that access. These users have been directed to use client secrets via environment variables to deploy to Azure, but we'd like to hide that area outright so that it's not misleading for users. The same goes for the recently added ServiceNow integration: since we're not going to use it that way, it would be good to be able to hide it as well. In short, a customisable viewing experience for users who are not admins.
💡 Feature Requests
11 days ago
Access Control
⬆️ Gathering votes
Worker launcher should allow setting a different default Docker image
Currently, if no .spacelift/config.yml file is provided at the repository root, the launcher defaults to the Spacelift image public.ecr.aws/spacelift/runner-terraform:latest, and there's no way to change that default for all stacks/runs. To avoid forcing a .spacelift/config.yml file onto each repository, while still giving customers a way to set their own default image for all runs in a worker pool, the launcher should allow customizing that default image via an environment variable (e.g. SPACELIFT_LAUNCHER_DEFAULT_RUNNER_IMAGE).
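As an illustration of where such a variable could be injected on a self-hosted worker, here is a sketch assuming an EC2 launch template whose user data starts the Spacelift launcher; the AMI, image name, and the SPACELIFT_LAUNCHER_DEFAULT_RUNNER_IMAGE variable itself are placeholders (the variable is the proposed one and does not exist today):

```hcl
variable "worker_pool_config" { type = string }      # worker pool token passed as SPACELIFT_TOKEN
variable "worker_pool_private_key" { type = string } # worker pool private key

# Sketch: expose the proposed (not yet existing) default-image variable to the
# launcher alongside the standard worker pool credentials.
resource "aws_launch_template" "spacelift_worker" {
  name_prefix   = "spacelift-worker-"
  image_id      = "ami-0123456789abcdef0" # placeholder AMI that runs the Spacelift launcher
  instance_type = "t3.medium"

  user_data = base64encode(<<-EOT
    #!/bin/bash
    export SPACELIFT_TOKEN="${var.worker_pool_config}"
    export SPACELIFT_POOL_PRIVATE_KEY="${var.worker_pool_private_key}"
    # Proposed variable from this request -- not recognised by the launcher today:
    export SPACELIFT_LAUNCHER_DEFAULT_RUNNER_IMAGE="ghcr.io/acme/custom-runner:latest"
  EOT
  )
}
```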
💡 Feature Requests
30 days ago
Add a "cancel all pending runs" button
Sometimes a stack has 10+ pending runs. Usually an old tracked run was never approved/confirmed. I want to cancel all these pending/unconfirmed/unapproved runs except the most recent. Doing this manually is annoying; I want a button!
💡 Feature Requests
18 days ago
Add a "cancel all pending runs" button
Sometimes a stack has 10+ pending runs. Usually an old tracked run was never approved/confirmed. I want to cancel all these pending/unconfirmed/unapproved runs except the most recent. Doing this manually is annoying, I want a button!
💡 Feature Requests
18 days ago
⬆️ Gathering votes
Support Git Sparse Checkouts
Our company uses a monorepo. All company code, including Terraform, is in this repo. Because of this, the repo is quite large and can take time to check out / download. We'd love it if we could sparse-checkout the terraform subdirectory, since that's all Spacelift needs. Maybe add it as a stack setting under source control. Scenario: an engineer needs to touch multiple stacks in their PR. That means they need to wait for all stacks to terraform plan before merging. Each stack has to pull their branch and run Terraform… so every second we can shave off pulling the branch gets them feedback faster.
💡 Feature Requests
About 1 month ago
VCS
Support for configuring a Spacelift Stack to "track" a git tag rather than a branch
We have a use case where we want to deploy some infra into multiple environments from a repository that uses git tags to semantically version the infra. By configuring a Stack to point at a git tag rather than a branch we could align the way we use Spacelift to manage this use case with our existing DevOps process. Our use case is a little bit different than Spacelift’s typical use case where a branch is tracked for infra changes. In our case, we’re not planning changes to this infrastructure. We just want to be able to create multiple stacks that deploy this infra into different environments. If (when) we need to make changes to this infrastructure, we will approach this by reconfiguring Stacks to point at a newer git tag.
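A sketch of how this might look with the Spacelift Terraform provider if a tag-tracking option existed; only `branch` is supported today, and the `tag` argument below (plus the stack and repository names) is hypothetical:

```hcl
resource "spacelift_stack" "env_prod" {
  name       = "platform-prod"   # placeholder stack name
  repository = "platform-infra"  # placeholder repository name
  branch     = "main"            # what Spacelift supports today

  # Hypothetical argument illustrating the request: pin the stack to a
  # semantic-version git tag instead of a moving branch.
  # tag = "v1.4.2"
}
```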
💡 Feature Requests
About 1 month ago
Stacks
⬆️ Gathering votes
Improved Visibility into Worker Billing (P95 Model)
We'd like to request improvements to billing visibility, specifically around how worker usage is calculated and billed under the P95 model.

Background: currently, the P95 billing model is a bit tricky to fully grasp from the UI, especially when trying to estimate how much we'll be billed or whether we're within our buffer. We asked a few questions and got helpful answers. Example: if we occasionally exceed the limit (e.g., 6 workers for a short period), the P95 might still be within our contracted limit (e.g., 5 workers), meaning no extra charges. However, if we go far over (e.g., 30 workers for the same short time), this could push the P95 up and cause overages. So while both the duration and the degree of overage matter, it's currently hard to know what our actual risk or buffer is.

Feature request: we'd love to see better visibility in the UI and/or via metrics endpoints around this billing model, specifically:
- Current P95 usage: where we stand right now for the billing cycle.
- Projected billing impact: a clear estimate of our current or projected overages, if any.
- Buffer insights: an answer to the question “How much buffer do we have left this month?”
- Simulation tool: something that helps us simulate the impact of temporarily increasing worker count (e.g., during migrations between pools) so we can make informed decisions.

Why it matters: this will help us plan operations like migrations without unintentional overages, understand when it's safe to scale up temporarily, and avoid surprises in billing. Thanks for your consideration!
💡 Feature Requests
27 days ago
UI/UX
⬆️ Gathering votes
Expose Run Promotion Flag in Payload
It would be great if you were able to include a Boolean flag for promoted runs in the run portion of the data input payload. This would be useful for crafting plan and approval policies. We allow run promotion to be enabled in environments beyond DEV, but we require special approval for QA and STAGE if a change is not being deployed through the normal PR process. We can typically determine whether a run was promoted by comparing the stack's branch with the run's branch, but that's not always possible, since setting the commit SHA with spacectl or other methods sometimes puts “-” as the branch. It would be much simpler to just check that promoted_run: true and that the stack has a qa or stage label in order to require an approval.

See: Run Promotion, Approval policy, and Plan policy in the Spacelift documentation.
💡 Feature Requests
About 1 month ago
Access Control
GitOps driven Tofu plan, apply, iterate
To avoid having too many tools our developers need to focus on, we would like a fully GitOps-driven Spacelift process: merging a PR applies all changes for the stack(s) the PR refers to, and we can iterate on a PR to reach the desired end result before merging it. There also needs to be a mechanism to prevent multiple PRs from applying changes to the same stack at the same time, to avoid changes being inadvertently reverted.
💡 Feature Requests
16 days ago
View Organization
Views currently have no organization capabilities in the GUI; it's simply a long list of views that cannot be searched except with Ctrl+F. Please make them groupable/searchable.
💡 Feature Requests
3 days ago
Access Control
API-Created Proposed Runs Aren't Easily Viewable
Currently the only way to view runs triggered via the API (rather than via source control) is the global “Runs” view. There should be a section of the console for each stack, similar to PRs/Tracked Runs, that shows these executions.
💡 Feature Requests
3 days ago
UI/UX
Link to run with only run ID
I would like a URL path that links to a run using only the run ID. My use case relates to downstream alerting pipelines that often have a Spacelift run ID (for example in a CloudTrail log event). Currently, I have to make a GraphQL runStack() query with the run ID to look up the stack ID before I can generate the https://BASE_URL/stack/STACK_ID/run/RUN_ID URL. I'd like to have something like https://BASE_URL/run/RUN_ID that does that lookup internally and redirects me to the canonical URL.
💡 Feature Requests
3 days ago
UI/UX