🔭 Discovery
Spacelift should pass the terraform plan output into terraform apply to ensure it only plans once
When the terraform plan output is posted to a PR so approvers can easily review it, we assume those are the changes that will be applied to the state file being changed. However, when the PR is merged into the default branch, Spacelift's default behaviour is to re-plan off the default branch and apply that new plan. This poses a problem: if the environment changes between the plan shown in the PR and the second plan after merge, Spacelift may apply changes that the original PR reviewer never approved. In the real world of AWS ClickOps this is a very real issue: another engineer may have made manual changes during an incident to mitigate a problem, and Spacelift would revert them. Adding an extra approval gate after merge and before apply to production environments is workable at small scale, but not for customers managing large-scale infrastructure who want to follow a strict GitOps workflow. For them it is important to make things easier to manage, not harder, and to keep the audit trail clear.
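For reference, this is the Terraform-native pattern the request describes; a minimal shell sketch of a saved-plan workflow, independent of how Spacelift structures its run phases today:

```shell
# Plan once and persist the exact change set that reviewers see in the PR.
terraform plan -out=tfplan

# Render the saved plan for the PR comment / approver review.
terraform show -no-color tfplan > plan-for-review.txt

# After approval, apply the *saved* plan rather than re-planning.
# If the remote state has drifted since the plan was created, Terraform
# rejects the stale plan instead of silently applying unreviewed changes.
terraform apply tfplan
```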
💡 Feature Requests
8 days ago
IaC Workflows
➡️ Planned
Stack List “Updated at” should reflect last applied run, not any run
On the Stacks → All Stacks page, the Updated at column updates whenever a run finishes, regardless of whether it applied successfully, failed, or was discarded after a plan-only run. As a result, simply kicking off a speculative/plan-only run bubbles that stack to the top of the list, even though the live infrastructure never changed.

Why it's a problem: I use the column to spot which stacks have had real infrastructure changes, or which have gone the longest without them. If a plan-only (speculative) run finishes, its stack's Updated at timestamp becomes "just now". When you sort the stack list by that column, those speculative stacks jump to the top, while the genuinely changed stacks, whose last apply happened earlier, get pushed lower in the list, so you don't see them at a glance. Auditing recent production changes becomes noisy.

Requested change: have Updated at reflect the timestamp of the most recent run that entered the Apply phase, regardless of outcome:

Run result | Should update "Updated at"?
Applied successfully | Yes
Apply failed / partially applied | Yes
Apply cancelled | Yes (when it passed the plan phase)
Plan-only (no apply attempted) | No
No-op (plan detected no changes) | No

I'm neutral on whether this is handled by a single "Updated at" field (showing any run that reached Apply) or by separate fields for "Last apply attempt" and "Last successful apply". I mainly need to exclude plan-only runs from the default sort.

Benefits: quickly surfaces stacks whose resources were recently touched, or whose apply just failed and needs attention; keeps speculative plan noise out of day-to-day monitoring and audits; and aligns Updated at with real-world impact.
💡 Feature Requests
9 days ago
Stacks
⬆️ Gathering votes
Stack Resources tab should show address instead of name
Since names can be repeated (like "this"), displaying the full address would make it much easier to identify the resource, output, or other item in the list. For example, several modules may each contain an aws_security_group named "this", but their addresses (say, module.vpc.aws_security_group.this vs. module.bastion.aws_security_group.this) are unique. Currently you can find yourself checking entries one by one, looking at the address each time until you find the right resource, which is time consuming.
💡 Feature Requests
15 days ago
Resources
🔭 Discovery
Ability to download state from stack
Allow downloading a stack's state file locally. This is useful for analyzing the state file's contents, as well as for making modifications and importing it back later. It also helps when a stack is stuck in a state it cannot refresh, for example due to bugs in providers or similar.
💡 Feature Requests
12 days ago
⬆️ Gathering votes
Overall PR view on Dashboard
It would be handy to have a view on the Dashboard of all open PRs with proposed changes in a single place rather than having to click into every stack
💡 Feature Requests
8 days ago
UI/UX
🔭 Discovery
Add Stack Name to stack.sync.commit and stack.delete Audit Logs
Hi, I'd like to request a feature enhancement for Spacelift audit logs. Currently, stack names are included in stack.update and stack.create audit logs, providing important context for these operations. However, the stack name is missing from stack.sync.commit and stack.delete audit logs. This inconsistency makes it difficult to track and audit stack activities across different operation types. When reviewing audit logs for synchronization commits or deletions, we need to cross-reference with other sources to identify which stack was affected, which significantly impacts our ability to troubleshoot issues and maintain proper compliance records.

Example audit log entries:

{"account":"glg","action":"stack.sync_commit","actor":"saml::xxxx","context":{"mutation":"stackSyncCommit"},"data":{"args":{"ID":"272054111736_datasync"}},"event_source":"spacelift","event_type":"audit_event","remoteIP":"202.142.117.226","timestamp":1745220683892,"timestamp_millis":1745220683892}

{"account":"glg","action":"stack.delete","actor":"spacelift","context":{"mutation":"stackDelete","runULID":"01JRBC295B1Z4X8XGM019RT9RC","stackSlug":"spacelift_stacks_677895703113"},"data":{"args":{"DestroyResources":null,"ID":"677895703113_gds_v3-i20"}},"event_source":"spacelift","event_type":"audit_event","remoteIP":"3.237.238.90","timestamp":1744138507179,"timestamp_millis":1744138507179}
💡 Feature Requests
20 days ago
⬆️ Gathering votes
Add custom IDs to rego policies samples
When debugging Rego policies, using sample { ... } is very useful for testing against previous real inputs. However, the filtering capability (for example, sample { input.stack.id == " " }) is sometimes not enough to easily identify a particular sample across time. The only identifier for "Sampled previous inputs" is a timestamp (for example, 2025-05-05 12:00:00), so it would be very useful if some extra information could be attached. I'd suggest adding some extra context by default, for instance input.stack.name, input.push.branch (for push policies), or others. That would make it much quicker to identify the sample you're interested in instead of going through them one by one until you find the one you're looking for. Additionally or alternatively, a sample_name output could be added to let the user customize the sample title. For instance, sample_name := input.stack.name would end up in a title like `2025-05-05 12:00:00 <stack name>`. What do you think? This would improve the debugging experience by reducing frustration and time spent debugging.
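As a rough illustration, assuming Spacelift were to honour a new sample_name rule next to the existing sample rule (hypothetical, not an existing output), a policy could look like this:

```rego
package spacelift

# Existing behaviour: ask Spacelift to record this input as a sample
# (the branch condition here is just an illustrative filter).
sample {
  input.push.branch == "main"
}

# Proposed (hypothetical): label the sample so the "Sampled previous inputs"
# list shows more than a bare timestamp.
sample_name := sprintf("%s @ %s", [input.stack.name, input.push.branch])
```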
💡 Feature Requests
6 days ago
Policies
➡️ Planned
Allow direct access through terraform_remote_state data blocks to other stack's TF states regardless of the space in which they are located
Right now, spaces and their access-control features also extend to Terraform states: for one source stack to read the TF state of another target stack, the source stack must be administrative, the target stack must have external state access enabled, and additionally, and this is the main issue/blocker for us, the target stack has to be located in the same space or a child space, as indicated here (point 3): https://support.spacelift.io/articles/9582913392-resolving-cannot-create-a-workspace-error-when-accessing-external-state

This is preventing us from adopting the new spaces approach, for the following reasons:

Spaces are mainly intended for access control and for grouping/constraining which resources can be used by or attached to other resources (e.g. AWS integrations, policies, etc.). In all those cases inheritance works so that each child space inherits the permissions/resources from its parent spaces. For example, a group with write access in a parent space also inherits write access in the child space, and in the same way an AWS integration attached to a parent space can be attached to a stack that belongs to a child space. So far so good. However, reading remote states from other stacks works in exactly the opposite direction: per the link above, a stack can only access the remote state of another stack in the same space or in a child space, and we have verified that this is indeed how it behaves. That makes the inheritance concept useless in many use cases, since it is common for stacks in child spaces (where teams are granted broader permissions) to need to read remote states from parent stacks (managing base/centralized resources), which are usually more sensitive and therefore have more restricted access. This cannot be achieved today, because the child stacks (with wider access) would have to be placed in parent spaces in order to get remote state access to the parent (more sensitive) stacks.

Administrative stacks (meaning stacks that manage other Spacelift resources) need to be placed in the parent spaces so that they can manage Spacelift resources belonging to lower spaces, and that is completely fine. The pain point is that reading remote states from other stacks, which has nothing to do with managing Spacelift resources, uses the same approach. The organization and relations in the Spacelift context do not necessarily have to mirror the relations between the resources managed by all the stacks spread across the space structure; in many cases, such as product companies, the resources managed across those stacks/spaces can perfectly well belong to a single platform or stack.
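For context, this is the kind of cross-stack read the request is about; a minimal sketch assuming the remote-backend pattern for consuming another stack's Spacelift-managed state (the organization name and stack slug below are placeholders):

```hcl
# Read outputs from another stack's Spacelift-managed state.
# Requires the target stack to have external state access enabled; today it
# must also live in the same space or a child space of the reading stack.
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    hostname     = "spacelift.io"
    organization = "my-org" # placeholder: Spacelift account name

    workspaces = {
      name = "network-core" # placeholder: slug of the target stack
    }
  }
}

# Example usage of an output exposed by the target stack.
output "vpc_id" {
  value = data.terraform_remote_state.network.outputs.vpc_id
}
```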
💡 Feature Requests
29 days ago
Access Control
🔭 Discovery
Ability to view full run history of stack
It would be useful to have a history of all runs of a specific stack, including previous proposed runs, local previews and tracked runs
💡 Feature Requests
12 days ago
Stacks
🔭 Discovery
github runs
Hi! We sometimes have multiple stacks in a single repository with the same name for different environments. This makes it tricky, when we update all stacks, to find the correct one, because the list is basically the same name repeated several times in a row. Can you do something about that so we know which stack is which before clicking?
📝 Feedback
About 19 hours ago
⚙️ In Progress
Some "Github API error" notifications lack necessary information to debug
I like that Spacelift sends these error messages, but this latest one lacks the context I need to investigate. Would it be possible to include more details about the API request/response being made, or a link to the related run(s) if one exists?
📝 Feedback
1 day ago
New integration request from the website: Jira - Specifically to show status of changes related to Jira tasks. We get this with other deployment tools and it helps track status of deployments by displaying those in the Jira task. It would be super helpful to see if a Spacelift PR/Tracked run was planning, blocked, waiting approval, applying or applied.
Cody Dunlap requested a new integration from the website: Jira - Specifically to show status of changes related to Jira tasks. We get this with other deployment tools and it helps track status of deployments by displaying those in the Jira task. It would be super helpful to see if a Spacelift PR/Tracked run was planning, blocked, waiting approval, applying or applied.
💡 Feature Requests
2 days ago
Scheduling Random Distribution / Jitter
This feature is based around a feature built into Jenkins scheduling. You can use an “H” character in a CRON-style expression to allow random distribution of executions. This makes it MUCH easier to enable drift detection on a large number of stacks without randomizing your own schedules. https://www.jenkins.io/doc/book/using/best-practices/#avoid-scheduling-overload
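For reference, the Jenkins syntax being referenced looks like this; the last expression is purely hypothetical and only shows what the same idea could look like for a Spacelift drift-detection schedule:

```text
# Jenkins replaces "H" with a stable, per-job hash, so identical expressions
# spread their firing times instead of all triggering at the same minute.
H H * * *       # once a day, at a job-specific minute and hour
H H(0-7) * * *  # once a day, somewhere between 00:00 and 07:59

# Hypothetical Spacelift equivalent: each stack's drift-detection schedule
# would get its own stable offset instead of every stack starting at 00:00.
H H(0-5) * * *
```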
💡 Feature Requests
3 days ago
⬆️ Gathering votes
Worker launcher should allow to set another default docker image
Currently, if no .spacelift/config.yml file is provided at the repository root, the launcher defaults to the Spacelift image public.ecr.aws/spacelift/runner-terraform:latest, and there's no way to change that default for all stacks/runs. To avoid forcing a .spacelift/config.yml file onto each repository, while still giving customers a way to use their own default image for all runs in a worker pool, the launcher should allow customizing that default image via an environment variable (e.g. SPACELIFT_LAUNCHER_DEFAULT_RUNNER_IMAGE).
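A rough sketch of how the proposal could be wired up when starting the launcher on a worker; the variable name is the one suggested in this request and does not exist today:

```shell
# Hypothetical: override the fallback runner image used for every run on this
# worker whenever the repository ships no .spacelift/config.yml.
export SPACELIFT_LAUNCHER_DEFAULT_RUNNER_IMAGE="registry.example.com/acme/runner-terraform:stable"

# The existing launcher configuration (worker pool token, private key, etc.)
# stays unchanged; the launcher would only consult the new variable when no
# repository-level runner image is configured.
./spacelift-launcher
```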
💡 Feature Requests
About 2 months ago
🔭 Discovery
Allow registering module with spacelift registry without tf in the root dir
If you try to register a module from a GitHub repo that doesn't have Terraform code in the root directory, you get the following error.
💡 Feature Requests
About 1 month ago
Terraform registry
➡️ Planned
Allow adding space hierarchy in OIDC subject
Request: have an option in Spacelift to include the full space hierarchy in the space claim of the OIDC subject. E.g. default behavior: space:<space_id>: (only the stack's own space); with the proposed feature enabled: space:root→space1_id→space1_child_id: (the full hierarchy).
💡 Feature Requests
About 1 month ago
Integrations
🔭 Discovery
Sorting by Delta
Currently, the filters don't allow you to sort by delta. I think this could be as simple as sorting by the sum of all changes, or potentially giving more weight to updates/deletes.
💡 Feature Requests
8 days ago
UI/UX
🔭 Discovery
Registry Credential Generation for OIDC API Integrations
Currently there is no way to generate a registry credential for API credentials. I find it a bit counterintuitive that the suggested fix for getting a registry credential for an OIDC API integration is to "generate an API key" alongside the OIDC API integration. That completely nullifies the point of going keyless with OIDC.
💡 Feature Requests
8 days ago
OIDC
🔭 Discovery
Extending Policy Input Data to Isolate PR-Specific Resource Changes
You provide the "Require commits to be reasonably sized" policy, which is useful, but we want to see if there is a way to compare the proposed run's resource changes with the current status of the stack, so that we only alert on resource changes introduced by this specific PR instead of also including resources changed by drift or by an un-applied tracked run. This may require adding more information to input_data.
💡 Feature Requests
9 days ago
Drift Detection