➡️ Planned
Helm chart: support templating annotations for CRDs (Argo CD ServerSideApply)
When deploying spacelift-workerpool-controller via Argo CD, applying the WorkerPool CRD (workerpool-crd.yaml) requires ServerSideApply=true (optionally Replace=true). Since CRDs are shipped under /crds and aren’t templated, it’s currently not possible to set argocd.argoproj.io/sync-options via values.yaml. Please add Kyverno-style support for configuring CRD annotations from values to allow fully automated GitOps sync.
💡 Feature Requests
9 days ago
Helm
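A Kyverno-style sketch of what this could look like, assuming the CRD moved under templates/ so its metadata can be templated. The crds.annotations values key and the CRD group/name below are hypothetical, not the chart's actual contents:

```yaml
# values.yaml (hypothetical key, mirroring Kyverno's crds.annotations):
crds:
  annotations:
    argocd.argoproj.io/sync-options: ServerSideApply=true,Replace=true
---
# templates/workerpool-crd.yaml (sketch): same CRD body as today, with
# annotations injected from values. Group/name are illustrative only.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: workerpools.workers.spacelift.io
  {{- with .Values.crds.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
```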
🔭 Discovery
Terraform and Argo CD Deployment Orchestration
We would like a way to deploy infrastructure and application changes together in a single flow. Today, infra changes go through Spacelift (Terraform) while app changes are handled separately via Argo CD, which forces developers to update multiple repos and tools for one release. We’re looking for orchestration or integration with Argo CD so Terraform runs and app deployments can be coordinated, without replacing their existing GitOps setup.
💡 Feature Requests
7 days ago
➡️ Planned
In the Spacelift UI, I want to be able to click the icon showing the current run state (applying, ...) and have it take me to the current run view
💡 Feature Requests
28 days ago
UI/UX
🔭 Discovery
Kubernetes / Argo / Terraform providers
We’re very happy with Spacelift overall; it “just works” for us. Where we struggle is the Kubernetes side. The Terraform Kubernetes providers, especially the Helm provider, have caused state issues for us and generally do not work perfectly, so we have moved to using Argo for everything in the cluster and avoid having Terraform deploy Helm charts. Concretely, two things would help us a lot: a quick, easy Helm chart we can deploy with Argo to run Spacelift components / private workers in our cluster, and a better-maintained Kubernetes and Helm provider story from your side. If you owned those and made them reliable, we would rather be using Spacelift than Argo and would happily pay for that.
💡 Feature Requests
7 days ago
🔭 Discovery
Agent-based stack refactor and cleanup
We have hundreds of stacks. More than half of them are in an error state. Most of those correspond to development/test infrastructure, and staffing a team to do the cleanup would be expensive. Lots of those stacks are “someone manually upgraded the database, so we need to create a PR to reflect the new DB version” or “this resource got created manually, and now the apply is failing due to the duplicate resource creation attempt.” In addition to failing stacks, some stacks are simply in disrepair. Missing imports, missing move blocks, and other defects mean that maintaining stack state is difficult. Obviously teams using Terraform should be consistent at using Terraform, rather than performing manual operations, but sometimes that’s significantly more expensive/time-consuming than clicking a button in AWS for non-production systems. There’s also a very large category of refactor-type work that’s very expensive to do manually, where you want to restructure some modules or move resources across stacks. I’d like to propose that these tasks are exceptionally appropriate for LLM agent-based workflows. We already have the technology to restrict permissions for terraform plans to dramatically minimize the risk of malicious configuration being introduced at that phase. Spacelift already has sophisticated workflow approvals. Often, the thing I want is for something or someone to operate in a loop making configuration changes and manipulating the state until terraform plan produces a plan with no delta, e.g. “No changes. Your infrastructure matches the configuration.” That’s an incredibly clear objective to give to an LLM! I’d be thrilled if Spacelift implemented tooling to help teams achieve better resource management.
💡 Feature Requests
7 days ago
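The loop the request describes — propose a change, re-plan, stop when the plan is empty — can be sketched abstractly. Everything below is a stand-in (no real Terraform, Spacelift, or LLM calls); it only illustrates the convergence objective:

```python
# Sketch of the proposed agent loop: iterate until `terraform plan`
# reports no delta. All functions are hypothetical stand-ins.

def plan_has_changes(state, config):
    """Stand-in for `terraform plan -detailed-exitcode`."""
    return state != config

def propose_fix(state, config):
    """Stand-in for one agent step (e.g. adding an import or moved block)."""
    missing = config - state
    return state | {next(iter(missing))}

def converge(state, config, max_steps=100):
    """Run agent steps until the plan is empty or the budget runs out."""
    steps = 0
    while plan_has_changes(state, config) and steps < max_steps:
        state = propose_fix(state, config)
        steps += 1
    return state, steps

# Example: two resources exist in config but are missing from state.
final_state, n_steps = converge({"vpc"}, {"vpc", "db", "bucket"})
```

The max_steps budget matters in practice: an agent that cannot converge should stop and escalate rather than loop forever.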
⚙️ In Progress
Enable drift detection without admin permission
It seems that enabling drift detection currently requires ADMIN permissions. With AAC, could a role be introduced for drift detection that would not require admin permissions?
💡 Feature Requests
About 1 month ago
Access Control
🔭 Discovery
UI / UX Rain Feature: Year-round Ambient Visual Effects (e.g. our Holiday snow feature)
Requesting additional ambient visual effects (similar to the holiday snow animation) to be available beyond seasonal use, such as a soothing rain or nature-themed effect. The customer shared that the existing snow effect was unexpectedly calming during long-running deploys. Having a subtle, mesmerizing visual helped make waiting for several minutes feel more pleasant and less tedious. Proposed idea: introduce optional, non-intrusive ambient UI effects (e.g., rain, nature themes) that could be enabled during long-running operations like deploys. These could be:
- available year-round
- optional / user-toggleable
- designed to be subtle and non-distracting
💡 Feature Requests
21 days ago
UI/UX
🔭 Discovery
Programmatic access to Spacelift managed state
We have thousands of stacks. The definition of the stacks is multilayered. (I.e. we have a “base”. Then, on top of the base, we build a “foo” and “bar” type. Then, for example, we could add an “aaa” on top of foo. And then, eventually, we create a stack on top of those other definitions.) This works fine in most cases. But, when we have to make a change to the “base” layer, we have to manually view any proposed changes. For even a dozen or so, this becomes painful. It is completely untenable to do that for hundreds, to say nothing of thousands. This would be much easier if we had a way to reference the state locally. Thus, we could do a local terraform plan just to figure out the scope of a proposed change. To be clear, I am only talking about “read only” access to the state. Ideally, we could annotate the TF somehow so that we did NOT have to make any change to run tf plan locally. And however it was set up, it would not interfere with the normal “file injection” Spacelift does. FTR, I am aware of https://docs.spacelift.io/vendors/terraform/state-management#exporting-spacelift-managed-terraform-state-file — but doing that for dozens of states/stacks is EXTREMELY time-consuming, and it would be a non-starter for hundreds or more.
💡 Feature Requests
About 23 hours ago
Stacks
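For read-only local plans, the direction this points in might look like Terraform's standard remote backend data source; the sketch below assumes Spacelift's external state access can serve it, and the account and stack names are made up:

```hcl
# Sketch: read-only access to another stack's state from a local plan,
# assuming external state access is enabled for the stack.
# "my-account" and "base-stack" are hypothetical names.
data "terraform_remote_state" "base" {
  backend = "remote"

  config = {
    hostname     = "spacelift.io"
    organization = "my-account"
    workspaces = {
      name = "base-stack"
    }
  }
}
```

The request here is essentially for this to work fleet-wide without per-stack manual setup or state export.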
🔭 Discovery
Customization options for Audit Trail webhook payload
The payload format of the Spacelift Audit Trail logs sent to an external endpoint should be customizable, similar to when using a Notification Policy to send to a webhook. Further payload customization options would better enable the Audit Trail logs to be sent directly to third-party observability or SIEM systems in their native format, without the need for intermediary transformation or ingestion.
💡 Feature Requests
2 days ago
Observability
⬆️ Gathering votes
Add registry pull through cache support
As a company concerned with reproducibility of my Spacelift stack runs, I would like to be able to set up a “pull through cache” style capability on my private registry. The behavior should be similar to AWS’s ECR feature (https://docs.aws.amazon.com/AmazonECR/latest/userguide/pull-through-cache.html). Ideally I would be able to point at an upstream provider via my private registry through a mapping of provider and namespace to a path within our registry. For bonus points, and to enable very security-conscious customers, this feature should also support policies that allow the organization to express which versions and external registries may be automatically pulled through/updated.
💡 Feature Requests
3 days ago
Terraform registry
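For comparison, the closest mechanism Terraform offers today is a static network mirror in the CLI config; the request is essentially for the private registry behind such a mirror to lazily fetch and cache from upstream, the way ECR does. The mirror URL below is hypothetical:

```hcl
# ~/.terraformrc (sketch): route provider installation through a private
# mirror. The URL is illustrative; the mirror itself would need to
# implement the pull-through behavior being requested.
provider_installation {
  network_mirror {
    url     = "https://registry.example.com/terraform/providers/"
    include = ["registry.terraform.io/*/*"]
  }
  direct {
    exclude = ["registry.terraform.io/*/*"]
  }
}
```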
⬆️ Gathering votes
Better observability for run times and worker utilization
We run self-hosted Spacelift and would like the following observability.
Run execution metrics:
- End-to-end run duration (histogram): distribution of total wall-clock time from run creation to terminal state. Enables tracking p50/p90/p99 durations, detecting regressions, and setting SLOs on deployment latency. Should support labels for stack, space, run type, and terminal state.
Worker pool metrics:
- Runs queued for workers (gauge): count of runs currently waiting for a worker, queryable over time. Enables alerting on queue saturation and right-sizing worker pools. Should support labels for stack, space, and worker pool.
- Per-run worker wait time (histogram): distribution of time each run spends waiting for a worker before execution begins. Should support labels for stack, space, and worker pool.
💡 Feature Requests
4 days ago
Workers
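If these were exposed as Prometheus metrics, the queue-depth gauge would support alerts along these lines; the metric and label names are invented for illustration, since no such metrics exist today:

```yaml
# Sketch Prometheus alerting rule on the requested queue-depth gauge.
# spacelift_runs_queued is a hypothetical metric name.
groups:
  - name: spacelift-workers
    rules:
      - alert: WorkerPoolQueueSaturated
        expr: spacelift_runs_queued{worker_pool="default"} > 10
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Runs have been queuing on pool {{ $labels.worker_pool }} for 10m"
```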
Drift detection notifications per stack owner or team label
Ability to route drift notifications based on stack level metadata. For example, each stack could have a label or owner field that maps to a Slack channel, alias or user list. When drift is detected, the system would send a notification directly to the right team instead of relying on a single global webhook. This is especially important where many teams own their own stacks.
💡 Feature Requests
23 days ago
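Since Spacelift notification policies are written in Rego, routing could plausibly key off a label like team:payments. The sketch below assumes the input schema and the channel IDs, so treat every name in it as illustrative:

```rego
# Sketch: route drift-detection notifications by a "team:<name>" stack
# label. Input paths and channel IDs are assumptions, not the real schema.
package spacelift

channels := {"payments": "C0123PAYMENTS", "platform": "C0456PLATFORM"}

slack[{"channel_id": channel}] {
  input.run_updated.run.drift_detection          # only drift runs
  label := input.run_updated.stack.labels[_]
  startswith(label, "team:")
  channel := channels[trim_prefix(label, "team:")]
}
```

The lookup fails closed: a stack with no matching team: label simply produces no notification from this rule, so a global fallback rule could coexist with it.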
Filter resources by attributes in the resources view
Ability to filter resources by their attributes, not only by IaC resource name. For example, I want to list all S3 buckets with a specific encryption configuration, or all resources with or without a given parameter. The data is already there in the inventory but today I need to click each resource, open the JSON on the right, and search manually. A filter on key and value would already solve most of this.
💡 Feature Requests
23 days ago
Reverse lookup: find stack by cloud resource
Ability to start from a cloud resource identifier or name and quickly find which stack manages it. Example: given an IAM role or S3 bucket name from an event or CloudTrail, I want to search in the resources view and see which stack is responsible for that resource. This makes it much easier to investigate manual changes, alerts and incidents that originate from the cloud side.
💡 Feature Requests
23 days ago
⚙️ In Progress
Support Terragrunt run-all with Spacelift Native State Management
Spacelift integrates with Terragrunt, but when using Terragrunt’s run-all functionality, Terraform state cannot be managed by Spacelift’s native state management. This limits the effectiveness of the integration for teams using Terragrunt to orchestrate multi-module and dependency-driven deployments.
💡 Feature Requests
About 1 month ago
Integrations
⬆️ Gathering votes
Explorer-like inventory of providers/modules (and versions) across Stacks
It would be really helpful if Spacelift had an Explorer-like inventory view (similar to Terraform Cloud → https://developer.hashicorp.com/terraform/cloud-docs/workspaces/explorer) showing providers/modules and their versions across all stacks. This would make it easier to:
- understand how far behind the stacks are
- see how spread out the module versions are across stacks
Nice-to-have additions:
- filters by stack/space/environment
- charts showing version distribution
- export to CSV
- click-through to affected stacks
💡 Feature Requests
About 1 month ago
UI/UX
🔭 Discovery
Option on spacelift_run resource to continue even if target stack is disabled
The spacelift_run resource has an optional wait block that allows it to continue if the run is in a defined run state, and to continue if it did not reach any defined end state. I would like to configure the resource to also continue if a run cannot be triggered because the target stack is disabled.
💡 Feature Requests
15 days ago
Spacelift Provider
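The existing wait block plus the requested knob might look like the sketch below; continue_if_stack_disabled is the proposed attribute and does not exist in the provider today:

```hcl
# Sketch: spacelift_run with its current wait block, plus a hypothetical
# flag for the requested "target stack is disabled" case.
resource "spacelift_run" "downstream" {
  stack_id = "target-stack-id"

  wait {
    continue_on_state   = ["finished", "skipped"]
    continue_on_timeout = true
    # continue_if_stack_disabled = true   # proposed; not a real attribute
  }
}
```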
Display Raw Terraform Output Only
Not sure if this is a change that needs to happen in spacectl or Spacelift itself. Any spacectl command that outputs logs (i.e. spacectl stack deploy or spacectl stack logs) displays both Spacelift and Terraform output. It would be useful if we could optionally suppress all Spacelift-specific logs and show only the raw Terraform output, possibly with a final line stating that the full Spacelift logs can be seen on Spacelift, linking the run URL.
💡 Feature Requests
15 days ago
Spacectl
➡️ Planned
RBAC for module delete-version
It’s currently not possible to delete a module version without the space_admin permission on the root space. I would like to see a permission like MODULE_DELETE_VERSION available in RBAC.
💡 Feature Requests
16 days ago
Access Control