🔭 Discovery
Allow merging multiple stack notifications into common GitHub PR comment
We sometimes get PRs which affect many different Stacks. Because each Stack posts an individual comment with the proposed run status, the PR conversation section can become bloated and basically unusable. We have also hit GitHub API rate limits, and these comments may have contributed to that. We would like an option to merge proposed runs into a single PR comment that gathers all proposed runs for a given PR. We keep comment content quite short, so the GitHub comment character limit should not be an issue.
💡 Feature Requests
20 days ago
Notifications
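A minimal sketch of the merged-comment idea above, assuming a hypothetical integration: a hidden marker line lets the integration find and update one comment per PR instead of posting a new comment per stack. The marker string and the run tuple shape are illustrative, not Spacelift's actual implementation.

```python
# Hypothetical sketch: render one merged PR comment from all proposed runs
# for a PR. The hidden marker lets an integration locate and update the
# existing comment instead of posting a new comment per stack.
MARKER = "<!-- merged-proposed-runs -->"  # illustrative marker, not a real Spacelift artifact

def render_merged_comment(runs):
    """runs: iterable of (stack_name, state, run_url) tuples (illustrative shape)."""
    lines = [MARKER, "Proposed runs for this PR:", ""]
    for stack, state, url in sorted(runs):
        lines.append(f"- {stack}: {state} ({url})")
    return "\n".join(lines)

comment = render_merged_comment([
    ("networking", "unconfirmed", "https://example.test/run/1"),
    ("app", "finished", "https://example.test/run/2"),
])
```

On each new proposed run, the integration would search the PR's comments for the marker and edit that comment in place, so the whole PR costs one comment plus updates rather than one comment per stack.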
Module registry: add version lifecycle states with optional sunset date
“Mark version as bad” is informational only. Enterprise customers need a structured lifecycle so module authors can deprecate versions with a grace period and then block unsupported versions without maintaining brittle external OPA logic. In many organizations, infrastructure patterns are encapsulated into approved modules (e.g., networking, S3 buckets, IAM roles). When these patterns are modularized, developers can safely self-serve infrastructure by consuming those modules rather than building resources directly.

Proposed solution: add a first-class lifecycle state for each module version:
- active
- deprecated (optional sunset_date)
- unsupported

Ideal behavior:
- Using a deprecated version: plan succeeds but emits a warning stating that the version is deprecated, the recommended version, and the sunset date (if set).
- Using an unsupported version: plan fails (hard stop).

Acceptance criteria:
- Registry UI shows the lifecycle state per version and (if applicable) the sunset date.
- Lifecycle state is persisted per version and queryable via GraphQL.
- Deprecated usage generates a warning surfaced in the run.
- Unsupported usage blocks the run automatically.
💡 Feature Requests
14 days ago
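The warn-versus-block behavior requested above can be sketched as a small decision function. Everything here (the function name, the state strings, the return shape) is illustrative, not an existing Spacelift API.

```python
from datetime import date

def check_module_version(state, sunset_date=None, recommended=None):
    """Sketch of the proposed plan-time behavior for one module version.
    Returns (allowed, warning); names and shapes are illustrative."""
    if state == "unsupported":
        return False, "version is unsupported; the run is blocked"
    if state == "deprecated":
        msg = "version is deprecated"
        if recommended:
            msg += f"; recommended version: {recommended}"
        if sunset_date:
            msg += f"; becomes unsupported on {sunset_date.isoformat()}"
        return True, msg
    return True, None  # active: no warning

# Deprecated: plan proceeds, but with a warning surfaced in the run.
allowed, warning = check_module_version("deprecated", date(2025, 6, 1), "2.3.0")
# Unsupported: hard stop.
blocked, _ = check_module_version("unsupported")
```

The point of the first-class state is that this logic lives in the registry itself, instead of being re-implemented per organization in external OPA policies.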
🔭 Discovery
External secrets and certificates from Key Vault
Today, we have stack-specific secrets that live in Azure Key Vault. To use them in Spacelift, we end up duplicating them into a Spacelift context or stack environment variables, so we have to maintain the same value in both Key Vault and Spacelift. That creates extra work, increases the chance of drift, and makes rotation harder. What I would like is a native way in Spacelift to reference an external secret store, starting with Azure Key Vault. For example, instead of pasting the value into a context, I want to be able to define something like “this variable comes from Key Vault secret X” and have Spacelift fetch it at runtime using the stack’s identity, service principal, or managed identity. This is similar to how Azure DevOps variable groups can pull from Key Vault: if the identity has access, the secret becomes available as a variable during the run.
💡 Feature Requests
2 days ago
OpenTofu
🔭 Discovery
Notification Policies Access to Variables
We use stack contexts and environment variables to store useful metadata, such as the version being deployed. However, notification policies do not have access to those context variables. Because of this, we have to run custom scripts before the plan step to expose values as flags, just so notifications can read them. It would be much cleaner if notification policies had access to the same variables that plan policies do, including context-attached environment variables. This would remove the need for workarounds and simplify our notification logic.
💡 Feature Requests
3 days ago
🔭 Discovery
Expose Full Dependency Tree and Root Run ID in Policies
We are heavily using stack dependencies, and when one stack triggers many others, we need visibility into the full dependency chain inside policies. Right now, we only have access to limited upstream information. It would be very helpful if every policy type, including notification policies, had access to the full dependency tree and the root run ID. This would allow us to:
- Track who originally started the run
- Thread Slack notifications properly
- Reference the same root run across all dependent stacks
- Build cleaner approval and notification logic

Currently, we have to rely on workarounds and custom scripting. Having full dependency visibility in policies would simplify our setup significantly.
💡 Feature Requests
3 days ago
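The root-run lookup the request describes amounts to walking the triggering chain back to its origin. A sketch, assuming a hypothetical mapping from each run to the run that triggered it (field names and IDs are illustrative):

```python
def root_run(run_id, parent_of):
    """Walk a parent-run mapping (illustrative shape) back to the run that
    originally started the dependency chain."""
    seen = set()
    while run_id in parent_of:
        if run_id in seen:
            raise ValueError("cycle in dependency chain")
        seen.add(run_id)
        run_id = parent_of[run_id]
    return run_id

# Illustrative chain: run-a triggered run-b, which triggered run-c.
parents = {"run-c": "run-b", "run-b": "run-a"}
root = root_run("run-c", parents)  # root of the chain is run-a
```

If policy input exposed this chain directly, the same root ID could key a single Slack thread across every dependent stack's notifications.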
🔭 Discovery
Default worker pool at space or organization level
It would be helpful to be able to set a private worker pool as the default for all stacks organization-wide.
💡 Feature Requests
6 days ago
Workers
⬆️ Gathering votes
Expose Busy / Queue length Worker Pool Metrics
I would like to be able to systematically monitor our worker pool to get quantitative data on our internal developer experience, since all of our engineers share the worker pool to run their IaC. I would like to use those metrics to make better-informed decisions about how many workers our organization needs.
💡 Feature Requests
16 days ago
Observability
🔭 Discovery
Support dynamic VCS repository input in Templates
Templates currently resolve the VCS repository field at publish time, meaning the repository is baked into the template version and can't be provided as a dynamic input. This prevents using Templates for use cases where the same stack configuration needs to be deployed across multiple repositories, e.g. a “Stack Vendor” template that engineers deploy from to onboard their repo.
💡 Feature Requests
8 days ago
Stacks
🔭 Discovery
Better support for ad hoc Ansible runs
As an infrastructure owner, I would like to be able to execute arbitrary Ansible playbooks using an existing Ansible stack. Spacelift currently locks each stack to a single playbook, which makes it difficult to make use of Ansible’s full capabilities for managing the operating systems and applications on our EC2 infrastructure.
💡 Feature Requests
28 days ago
🔭 Discovery
Enable downloading of a stack's state changelog
An easy way to export a stack's state change logs for auditing, e.g. something like an export of the State History.
💡 Feature Requests
10 days ago
Observability
🔭 Discovery
Execute Ansible-vendored tracked runs in a stack concurrently
As a platform engineer, we are building a platform (currently in the proof-of-concept stage) that uses Ansible-vendored Spacelift stacks to execute Ansible playbooks. These stack runs are triggered by orchestration via the API, such that each stack run executes against one EC2 instance as instances are launched. Due to the blocking nature of stack runs, this limits the stack's ability to run multiple Ansible playbooks concurrently. Tasks are not a valid alternative, as they do not benefit from the resource/host configuration management natively built into Spacelift stacks. Jubran Nassar is familiar with our use case. If we end up selecting Spacelift as the tool for our platform, this will be important to solve, but we are marking it as nice-to-have for now. We should know more in the next few weeks.
💡 Feature Requests
22 days ago
Allow MCP-based integrations with Spacelift without requiring spacectl
Problem

Spacelift currently provides MCP functionality through the spacectl MCP server. To use MCP with Spacelift today, users must:
- Install spacectl
- Authenticate using spacectl profile login
- Configure their coding assistant to run: spacectl mcp server

Requiring a CLI purely to support MCP integrations makes it harder to integrate Spacelift into internal platforms that standardize on API-driven MCP servers.

Proposed Solution

Provide a way to integrate Spacelift into MCP-based AI systems without requiring spacectl. This could be achieved in one of two ways:
- Option 1: Provide an official Spacelift MCP server that communicates directly with the Spacelift API / GraphQL.
- Option 2: Provide documented integration patterns that allow customers to easily build their own MCP servers using the existing GraphQL API.

Ideal Capabilities

An MCP integration should allow AI tools to interact with Spacelift capabilities such as:
- discovering modules in the Spacelift module registry
- retrieving module metadata (inputs, outputs, examples)
- identifying the latest module version
- generating infrastructure code using approved modules

This enables AI assistants to generate infrastructure that aligns with an organization's approved module ecosystem.

Customer Value

Many organizations are adopting AI-assisted infrastructure development. When infrastructure patterns are encapsulated into approved modules, AI tools can safely generate infrastructure code by discovering and using those modules. Allowing MCP integrations without requiring a CLI dependency would make it easier for organizations to integrate Spacelift into internal developer platforms where multiple systems are connected through MCP servers and APIs.
💡 Feature Requests
14 days ago
Allow managing runs with spacectl
Right now, via the UI, we can directly access the list of runs for a group of workers. This is very useful for bulk-managing runs based on different filters (sha, drift, proposed). When using spacectl, the only way to list runs is to iterate over the stacks and get the runs of each stack. It would be extremely useful to add run management to spacectl, similar to what we can do now in the UI.
💡 Feature Requests
About 10 hours ago
Spacectl
🔭 Discovery
Cycle Individual Worker
Worker pools currently have drain functionality; it would also be useful to be able to cycle individual workers, instead of having to cycle the entire pool or manually terminate instances.
💡 Feature Requests
1 day ago
Pending Runs Counter should exclude drift detection runs
When a user is looking at a proposed or tracked run that is waiting for an available worker, the queue length should not include drift detection runs. These runs are de-prioritized against all other runs, so they are not actually blocking the user's Spacelift run.
📝 Feedback
2 days ago
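The requested counter is a simple filter over the pending queue. A sketch, assuming an illustrative queue shape with a per-run drift flag (not Spacelift's actual data model):

```python
def pending_ahead(queue):
    """queue: pending runs ahead of the user's run; the 'drift_detection'
    flag is an illustrative field name. The requested counter skips
    de-prioritized drift detection runs."""
    return sum(1 for run in queue if not run.get("drift_detection"))

queue = [
    {"id": "run-1", "drift_detection": False},
    {"id": "run-2", "drift_detection": True},
    {"id": "run-3", "drift_detection": True},
]
count = pending_ahead(queue)  # 1 run actually ahead, instead of the naive 3
```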
⬆️ Gathering votes
Add explicit version to plugin
As a user of the plugin system, I want to update plugins to new versions seamlessly (without deleting and reinstalling) and to specify versions in my Tofu provider configuration, so that I can manage dependencies efficiently, ensure compatibility, and reduce manual overhead.
💡 Feature Requests
17 days ago
➡️ Planned
spacectl: Add --wait / exit-on-complete for stack tasks
In agent workflows, models often use spacectl stack task --tail, which doesn’t exit and stalls agents. Add a --wait (or similar) mode that returns when the task completes (with success/failure exit code) and document it as the recommended option for automation/agents. Example: spacectl stack task --id --wait "terraform import ..." exits when done; --tail remains for interactive streaming.
💡 Feature Requests
27 days ago
Spacectl
⬆️ Gathering votes
MCP: Reduce token-heavy outputs + document tool output format
MCP tools can be inefficient with token usage. For example, List Resources may return extremely large outputs (hundreds of thousands of characters / thousands of lines), exceeding model tool limits and causing agents to resort to brittle grepping/jq workarounds. Please introduce more token-safe defaults (pagination/limits/filtered output) and improve documentation explaining the tool output format so this doesn’t happen. Example: Listing resources could return a small summary by default instead of 14k+ lines.
💡 Feature Requests
27 days ago
Spacectl
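One shape the token-safe default described above could take: return counts plus a capped sample rather than the full listing. This is a hedged sketch; the field names and summary shape are illustrative, not the MCP tool's actual output format.

```python
def summarize_resources(resources, limit=5):
    """Sketch of a token-safe default: counts by type plus a capped sample,
    instead of the full listing. Field names are illustrative."""
    by_type = {}
    for r in resources:
        by_type[r["type"]] = by_type.get(r["type"], 0) + 1
    return {
        "total": len(resources),
        "by_type": by_type,
        "sample": [r["address"] for r in resources[:limit]],
        "truncated": len(resources) > limit,
    }

# 100 resources collapse to a fixed-size summary the model can reason over.
resources = [{"type": "aws_s3_bucket", "address": f"aws_s3_bucket.b{i}"}
             for i in range(100)]
summary = summarize_resources(resources)
```

A "truncated" flag plus pagination parameters would let an agent ask for more only when it actually needs the detail.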
🔭 Discovery
Terraform and Argo CD Deployment Orchestration
We would like a way to deploy infrastructure and application changes together in a single flow. Today, infra changes go through Spacelift (Terraform) while app changes are handled separately via Argo CD, which forces developers to update multiple repos and tools for one release. We’re looking for orchestration or integration with Argo CD so Terraform runs and app deployments can be coordinated, without replacing their existing GitOps setup.
💡 Feature Requests
About 1 month ago
🔭 Discovery
Programmatic access to Spacelift managed state
We have thousands of stacks. The definition of the stacks is multilayered (i.e., we have a “base”; then on top of the base we build a “foo” and a “bar” type; then, for example, we could add an “aaa” on top of foo; and eventually we create a stack on top of those other definitions). This works fine in most cases. But when we have to make a change to the “base” layer, we have to manually view any proposed changes. For even a dozen or so, this becomes painful. It is completely untenable for hundreds, to say nothing of thousands. This would be much easier if we had a way to reference the state locally: we could do a local terraform plan just to figure out the scope of a proposed change. To be clear, I am only talking about read-only access to the state. Ideally, we could annotate the TF somehow so that we did NOT have to make any change to run tf plan locally. And however it was set up, it would not interfere with the normal “file injection” Spacelift does. FTR, I am aware of: https://docs.spacelift.io/vendors/terraform/state-management#exporting-spacelift-managed-terraform-state-file Doing that for dozens of states/stacks is EXTREMELY time consuming, and it would be a non-starter for hundreds or more.
💡 Feature Requests
About 1 month ago
Stacks
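Once a state document is accessible read-only (by whatever mechanism), the kind of local inspection described above is straightforward. A sketch that walks the standard Terraform state (version 4) JSON layout; the sample document is illustrative:

```python
import json

def list_resource_addresses(state_json):
    """Read-only sketch: list resource addresses from a Terraform state
    document in the standard version 4 JSON layout."""
    state = json.loads(state_json)
    addrs = []
    for res in state.get("resources", []):
        # Resources inside a module carry a "module" key, e.g. "module.net".
        prefix = f'{res["module"]}.' if res.get("module") else ""
        addrs.append(f'{prefix}{res["type"]}.{res["name"]}')
    return addrs

# Illustrative state document, not fetched from anywhere.
sample = json.dumps({
    "version": 4,
    "resources": [
        {"type": "aws_s3_bucket", "name": "logs"},
        {"module": "module.net", "type": "aws_vpc", "name": "main"},
    ],
})
addresses = list_resource_addresses(sample)
```

The hard part of the request is not parsing the state but getting read-only access to it programmatically at scale, without per-stack manual exports.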