⚙️ In Progress
Ability to decide when to forcibly stop runs
Currently, pressing the "Stop" button sends a SIGINT signal to the process, followed by a 30-second grace period. If the process hasn't stopped by then, a SIGKILL signal is sent. This isn't ideal. Stopping and cleaning up sometimes takes longer than 30 seconds. Forcibly killing the process often leaves behind a dangling state lock, an out-of-sync state file, or a corrupted state file. I'd like the "Stop" button to change to "Forcibly stop" after the first press, letting me decide when to force termination. If I don't press it again, the run timeout would eventually stop the process automatically.
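The escalation described above can be sketched as follows. This is a minimal illustration of the current behavior, not Spacelift's actual implementation; `stop_with_grace` is a hypothetical helper, and the demo shortens the 30-second grace period.

```python
import signal
import subprocess
import time

def stop_with_grace(proc, grace_seconds=30):
    """Send SIGINT first; escalate to SIGKILL only if the process
    is still alive when the grace period expires."""
    proc.send_signal(signal.SIGINT)
    try:
        proc.wait(timeout=grace_seconds)
    except subprocess.TimeoutExpired:
        proc.kill()  # SIGKILL: the step that risks dangling state locks
        proc.wait()
    return proc.returncode

# A cooperative process exits on SIGINT well within the grace period.
proc = subprocess.Popen(["sleep", "60"])
time.sleep(0.2)  # let the child finish exec before signalling it
code = stop_with_grace(proc, grace_seconds=5)
```

The request amounts to making the `proc.kill()` branch a second, explicit button press instead of an automatic timeout.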
💡 Feature Requests
6 days ago
Stacks
⚙️ In Progress
Project-level filtering for AzDO VCS Integration
Implement a way to filter by project in Azure DevOps when creating a VCS integration, so that such an integration can only be used for git repositories within that project.
💡 Feature Requests
7 days ago
Integrations
⬆️ Gathering votes
MCP: Reduce token-heavy outputs + document tool output format
MCP tools can be inefficient with token usage. For example, List Resources may return extremely large outputs (hundreds of thousands of characters / thousands of lines), exceeding model tool limits and causing agents to resort to brittle grepping/jq workarounds. Please introduce more token-safe defaults (pagination/limits/filtered output) and improve documentation explaining the tool output format so this doesn’t happen. Example: Listing resources could return a small summary by default instead of 14k+ lines.
💡 Feature Requests
4 days ago
Spacectl
🔭 Discovery
Terraform and Argo CD Deployment Orchestration
We would like a way to deploy infrastructure and application changes together in a single flow. Today, infra changes go through Spacelift (Terraform) while app changes are handled separately via Argo CD, which forces developers to update multiple repos and tools for one release. We’re looking for orchestration or integration with Argo CD so Terraform runs and app deployments can be coordinated, without replacing their existing GitOps setup.
💡 Feature Requests
18 days ago
➡️ Planned
spacectl: Add --wait / exit-on-complete for stack tasks
In agent workflows, models often use spacectl stack task --tail, which doesn’t exit and stalls agents. Add a --wait (or similar) mode that returns when the task completes (with success/failure exit code) and document it as the recommended option for automation/agents. Example: spacectl stack task --id --wait "terraform import ..." exits when done; --tail remains for interactive streaming.
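Until a native flag exists, the desired `--wait` semantics can be approximated by polling. Everything here is hypothetical: `get_task_state` stands in for whatever call returns the task's current state (it is not a real spacectl command), and the state names are assumptions.

```python
import time

TERMINAL_STATES = {"FINISHED", "FAILED", "STOPPED"}  # assumed state names

def wait_for_task(get_task_state, poll_seconds=1.0, timeout=600.0):
    """Poll until the task reaches a terminal state, then return an
    exit code (0 on success, 1 otherwise) -- the contract a --wait
    flag would be expected to honor."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_task_state()
        if state in TERMINAL_STATES:
            return 0 if state == "FINISHED" else 1
        time.sleep(poll_seconds)
    raise TimeoutError("task did not reach a terminal state in time")

# Simulated task: queued twice, then finished.
states = iter(["QUEUED", "QUEUED", "FINISHED"])
exit_code = wait_for_task(lambda: next(states), poll_seconds=0.01)
```

A built-in flag would make this loop unnecessary and give agents a deterministic exit signal.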
💡 Feature Requests
4 days ago
Spacectl
🔭 Discovery
Better support for ad-hoc Ansible runs
As an infrastructure owner, I would like to be able to execute arbitrary Ansible playbooks using an existing Ansible stack. Spacelift currently locks each stack to a single playbook, which makes it difficult to use Ansible’s full capabilities for managing the operating systems and applications on our EC2 infrastructure.
💡 Feature Requests
5 days ago
⬆️ Gathering votes
Deny actions in custom roles
Currently, custom role definitions support only allowlisted role actions. With the number of actions constantly increasing (which is great, by the way, kudos to the team!), it’s getting hard to track new ones and to evaluate/adjust existing role definitions. It would therefore be great to have the ability to deny certain actions from the superset of allowed ones. For example, I want to give internal users the ability to manage all aspects of a space except worker pools. Currently, the only option in this example is to maintain a custom role with a curated list of actions that omits the WORKER_POOL_ ones.
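The allow-minus-deny semantics being requested reduces to a set operation. A minimal sketch; the action names below are illustrative, not the real role-action catalog:

```python
def effective_actions(allowed, denied_prefixes=(), denied=frozenset()):
    """Start from a broad allow set, then subtract explicit denies and
    any action matching a denied prefix (e.g. WORKER_POOL_*)."""
    return {
        a for a in allowed
        if a not in denied and not a.startswith(tuple(denied_prefixes))
    }

# Illustrative action names, not Spacelift's actual catalog.
all_space_actions = {
    "SPACE_READ", "SPACE_WRITE", "STACK_MANAGE",
    "WORKER_POOL_CREATE", "WORKER_POOL_DELETE",
}
granted = effective_actions(all_space_actions, denied_prefixes=["WORKER_POOL_"])
```

The advantage over a curated allowlist is that newly introduced actions are granted by default and only the deny list needs maintenance.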
💡 Feature Requests
6 days ago
Access Control
➡️ Planned
Helm chart: support templating annotations for CRDs (Argo CD ServerSideApply)
When deploying spacelift-workerpool-controller via Argo CD, applying the WorkerPool CRD (workerpool-crd.yaml) requires ServerSideApply=true (optionally Replace=true). Since CRDs are shipped under /crds and aren’t templated, it’s currently not possible to set argocd.argoproj.io/sync-options via values.yaml. Please add Kyverno-style support for configuring CRD annotations from values to allow fully automated GitOps sync.
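A Kyverno-style knob might look like the fragment below. The `crds.annotations` key is an assumption about how the spacelift-workerpool-controller chart could expose this; it does not exist in the chart today.

```yaml
# Hypothetical values.yaml addition -- not an existing chart option.
crds:
  annotations:
    argocd.argoproj.io/sync-options: ServerSideApply=true,Replace=true
```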
💡 Feature Requests
20 days ago
Helm
🔭 Discovery
Include the run_id of an upstream stack, and any other metadata, in policy inputs
We currently have a stack that, when triggered, triggers its dependencies. We want to create one notification policy and have a unique id we can use for the entire dependency chain, since run.id is per stack. I tried exposing the upstream stack’s run_id as an output, but when dependent stacks get triggered, the policy inputs contain neither that output nor the run_id of the upstream stack. For our purposes just a triggered_by_run_id would be perfect, but in general the request is to include all possible metadata, especially the inputs the stack received from the outputs of an upstream stack. Something like inputs.triggered_by_outputs would be nice too.
💡 Feature Requests
7 days ago
Notifications
➡️ Planned
In the Spacelift UI, I want to be able to click the icon showing the current run state (applying, ...) and have it take me to the current run view.
💡 Feature Requests
About 1 month ago
UI/UX
🔭 Discovery
Programmatic access to Spacelift managed state
We have thousands of stacks. The definition of the stacks is multilayered (i.e. we have a “base”; then, on top of the base, we build a “foo” and “bar” type; then, for example, we could add an “aaa” on top of foo; and eventually we create a stack on top of those other definitions). This works fine in most cases. But when we have to make a change to the “base” layer, we have to manually review any proposed changes. For even a dozen or so stacks, this becomes painful; it is completely untenable for hundreds, to say nothing of thousands. This would be much easier if we had a way to reference the state locally, so we could do a local terraform plan just to figure out the scope of a proposed change. To be clear, I am only talking about read-only access to the state. Ideally, we could annotate the Terraform somehow so that we did NOT have to make any change to run tf plan locally, and however it was set up, it would not interfere with the normal “file injection” Spacelift does. FTR, I am aware of https://docs.spacelift.io/vendors/terraform/state-management#exporting-spacelift-managed-terraform-state-file. Doing that for dozens of states/stacks is EXTREMELY time-consuming, and it would be a non-starter for hundreds or more.
💡 Feature Requests
12 days ago
Stacks
Error when renaming Login Policy to same name but capitalized letter
Created a login policy “login” and attempted to rename it to “Login”, but got the error message “A policy with this name already exists in the account.” even before submitting the form to save the change. Confirmed this error behavior is consistent across other login policies. The workaround was to rename the policy to a placeholder name and then rename it again to the capitalized “Login”. The error seems to be tied to the client-side input validation.
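The reported behavior is consistent with a uniqueness check that compares names case-insensitively but does not exclude the policy currently being edited. A minimal reproduction of that suspected logic (the function and names are hypothetical, not Spacelift's actual validation code):

```python
def name_taken(new_name, existing_names, editing_name=None):
    """Case-insensitive uniqueness check. Passing editing_name excludes
    the record being renamed -- the likely missing piece in the UI."""
    lowered = {n.lower() for n in existing_names if n != editing_name}
    return new_name.lower() in lowered

existing = ["login", "audit"]
buggy = name_taken("Login", existing)                       # flags a conflict
fixed = name_taken("Login", existing, editing_name="login") # no conflict
```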
📝 Feedback
About 4 hours ago
UI/UX
Notification Policy - Pull Request comment not auto-resolved
By default, Pull Request comments (as outlined in the docs: https://docs.spacelift.io/concepts/policy/notification-policy#creating-a-pr-comment) are auto-resolved/closed, at least on Azure DevOps. This default behaviour is fine for the built-in PR integration, which simply reports proposed changes on a PR run back to the Azure DevOps UI itself. However, if the notification policy is being used to flag something that requires user input, it would be useful to be able to turn off this auto-resolution behaviour. That way the comment would act like a normal user comment which needs some action (or discussion) to resolve. Without this, comments can be ignored and changes can be merged without any action being taken. On Azure DevOps this is enforced under Repos > Policy > Check for comment resolution; on GitHub it looks like there is a “Require conversation resolution before merging” setting. Something like this could be implemented:

package spacelift

pull_request contains {
  "commit": run.commit.hash,
  "body": "",
  "auto-resolve": false,
} if {
  ...
}

The auto-resolve input would default to true, so as not to break existing behaviour.
💡 Feature Requests
About 10 hours ago
⬆️ Gathering votes
Better observability for run times and worker utilization
We run self-hosted Spacelift and would like the following observability.

Run Execution Metrics
- End-to-end run duration (histogram): distribution of total wall-clock time from run creation to terminal state. Enables tracking p50/p90/p99 durations, detecting regressions, and setting SLOs on deployment latency. Should support labels for stack, space, run type, and terminal state.

Worker Pool Metrics
- Runs queued for workers (gauge): count of runs currently waiting for a worker, queryable over time. Enables alerting on queue saturation and right-sizing worker pools. Should support labels for stack, space, and worker pool.
- Per-run worker wait time (histogram): distribution of time each run spends waiting for a worker before execution begins. Should support labels for stack, space, and worker pool.
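The p50/p90/p99 tracking requested above can be sketched with the standard library. In practice these would come from a histogram exporter; the durations here are made-up sample data.

```python
import statistics

def duration_percentiles(durations_s):
    """Return p50/p90/p99 of run durations (seconds), interpolating
    over 100 inclusive quantile cut points."""
    qs = statistics.quantiles(durations_s, n=100, method="inclusive")
    return {"p50": qs[49], "p90": qs[89], "p99": qs[98]}

# Made-up end-to-end run durations for ten runs (seconds).
samples = [30, 35, 40, 42, 45, 50, 60, 75, 120, 600]
p = duration_percentiles(samples)
```

The long tail in the sample (one 600 s run) is exactly what a mean would hide and a histogram with percentile queries would surface.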
💡 Feature Requests
15 days ago
Workers
🔭 Discovery
Kubernetes / Argo / Terraform providers
We’re very happy with Spacelift overall; it “just works” for us. Where we struggle is the Kubernetes side. The Terraform Kubernetes providers, especially the Helm provider, have caused state issues for us and generally do not work perfectly, so we have moved to using Argo for everything in the cluster and avoid having Terraform deploy Helm charts. Concretely, what would help us a lot:
- A quick, easy Helm chart we can deploy with Argo to run Spacelift components / private workers in our cluster.
- A better-maintained Kubernetes and Helm provider story from your side. If you owned those and made them reliable, we would rather be using Spacelift than Argo and would happily pay for that.
💡 Feature Requests
18 days ago
🔭 Discovery
Agent-based stack refactor and cleanup
We have hundreds of stacks. More than half of them are in an error state. Most of those correspond to development/test infrastructure, and staffing a team to do the cleanup would be expensive. Lots of those stacks are “someone manually upgraded the database, so we need to create a PR to reflect the new DB version” or “this resource got created manually, and now the apply is failing due to the duplicate resource creation attempt.”

In addition to failing stacks, some stacks are simply in disrepair. Missing imports, missing move blocks, and other defects mean that maintaining stack state is difficult. Obviously teams using Terraform should be consistent about using Terraform rather than performing manual operations, but sometimes that’s significantly more expensive/time-consuming than clicking a button in AWS for non-production systems. There’s also a very large category of refactor-type work that’s very expensive to do manually, where you want to restructure some modules or move resources across stacks.

I’d like to propose that these tasks are exceptionally appropriate for LLM agent-based workflows. We already have the technology to restrict permissions for terraform plans to dramatically minimize the risk of malicious configuration being introduced at that phase. Spacelift already has sophisticated workflow approvals. Often, the thing I want is for something or someone to operate in a loop, making configuration changes and manipulating the state until terraform plan produces a plan with no delta, e.g. “No changes. Your infrastructure matches the configuration.” That’s an incredibly clear objective to give to an LLM! I’d be thrilled if Spacelift implemented tooling to help teams achieve better resource management.
💡 Feature Requests
18 days ago
⬆️ Gathering votes
API endpoint to resolve AWS ARN -> owning Spacelift stack(s)
Hey Spacelift team, I have a feature request for an API endpoint to resolve AWS ARN -> owning Spacelift stack(s). We recently needed to triage hundreds of AWS ARNs and determine whether each resource is managed by Spacelift and by which stack. Today, the only workable path we found was to ingest all stacks + entities and build a local index ourselves to query. A native ARN lookup API would be a big improvement and would dramatically reduce API load (for us, from thousands of requests to effectively 1 lookup flow).
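The workaround described (ingesting everything and building a local index) reduces to an inverted map from ARN to stack, which is roughly what a native lookup endpoint would serve. A sketch with fabricated stack names and ARNs:

```python
def build_arn_index(stacks_to_arns):
    """Invert {stack_id: [arns]} into {arn: [stack_ids]} so each
    lookup is O(1) instead of a re-scan of every stack's entities."""
    index = {}
    for stack_id, arns in stacks_to_arns.items():
        for arn in arns:
            index.setdefault(arn, []).append(stack_id)
    return index

# Fabricated example data.
index = build_arn_index({
    "networking-prod": ["arn:aws:ec2:us-east-1:111111111111:vpc/vpc-abc"],
    "app-prod": ["arn:aws:s3:::my-bucket"],
})
owners = index.get("arn:aws:s3:::my-bucket", [])
```

A server-side version of this index is what would collapse thousands of list requests into a single lookup call.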
💡 Feature Requests
4 days ago
Contexts
➡️ Planned
RBAC for module delete-version
It’s currently not possible to delete a module version without the space_admin permission on the root space. I would like to see a permission like MODULE_DELETE_VERSION available in RBAC.
💡 Feature Requests
27 days ago
Access Control
🔭 Discovery
FIPS Support for Self-Hosted Spacelift Server Images
Build two images when releasing new versions of Spacelift, with the new one having FIPS enabled and tagged with -FIPS or something similar. The FIPS image should also include a log message that verifies FIPS is enabled on boot; this eliminates the need for the user to enable exec mode to confirm it for auditors. Ideally, all install methods, including CloudFormation, would include an option to enable FIPS, which would automatically select the FIPS image.
💡 Feature Requests
7 days ago
Self-hosted