Feature Requests

Got an idea for a feature request? Let us know! Share your ideas on improving existing features or suggest something new. Vote on ideas you find useful!

Make sure to read our guidelines before posting 📖

Module registry: add version lifecycle states with optional sunset date

“Mark version as bad” is informational only. Enterprise customers need a structured lifecycle so module authors can deprecate versions with a grace period and then block unsupported versions without maintaining brittle external OPA logic. In many organizations, infrastructure patterns are encapsulated into approved modules (e.g., networking, S3 buckets, IAM roles). When these patterns are modularized, developers can safely self-serve infrastructure by consuming those modules rather than building resources directly.

Proposed solution

Add a first-class lifecycle state to each module version:

- active
- deprecated (with optional sunset_date)
- unsupported

Ideal behavior

- Using a deprecated version: the plan succeeds but emits a warning stating that the version is deprecated, the recommended version, and the sunset date (if set).
- Using an unsupported version: the plan fails (hard stop).

Acceptance criteria

- Registry UI shows the lifecycle state per version and, if applicable, the sunset date.
- Lifecycle state is persisted per version and queryable via GraphQL.
- Deprecated usage generates a warning surfaced in the run.
- Unsupported usage blocks the run automatically.
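To illustrate the GraphQL acceptance criterion, here is a rough sketch of what a query for lifecycle data might look like. The lifecycle fields (lifecycleState, sunsetDate, recommendedVersion) are hypothetical and do not exist in the API today; the shape is only meant to show the requested capability.

```graphql
# Hypothetical query shape for the requested lifecycle data.
query ModuleVersionLifecycle($moduleId: ID!) {
  module(id: $moduleId) {
    versions {
      number
      lifecycleState     # proposed enum: ACTIVE | DEPRECATED | UNSUPPORTED
      sunsetDate         # proposed, optional; set when DEPRECATED
      recommendedVersion # proposed; surfaced in the deprecation warning
    }
  }
}
```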

💡 Feature Requests

14 days ago

1

🔭 Discovery

External secrets and certificates from Key Vault

Today, we have stack-specific secrets that live in Azure Key Vault. To use them in Spacelift, we end up duplicating them into a Spacelift context or stack environment variables, so we have to maintain the same value both in Key Vault and in Spacelift. That creates extra work, increases the chance of drift, and makes rotation harder.

What I would like is a native way in Spacelift to reference an external secret store, starting with Azure Key Vault. For example, instead of pasting the value into a context, I want to be able to define something like “this variable comes from Key Vault secret X” and have Spacelift fetch it at runtime using the stack’s identity, service principal, or managed identity. This is similar to how Azure DevOps variable groups can pull from Key Vault: if the identity has access, the secret becomes available as a variable during the run.
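To make the request concrete, here is a sketch of what “this variable comes from Key Vault secret X” could look like via the Spacelift Terraform provider. The value_from block is hypothetical — it does not exist in the provider today — and the vault URI and secret name are placeholder values.

```hcl
# Sketch only: the "value_from" block below is a proposed, hypothetical
# syntax illustrating the requested Key Vault integration.
resource "spacelift_environment_variable" "db_password" {
  context_id = spacelift_context.app.id
  name       = "TF_VAR_db_password"
  write_only = true

  # Proposed: resolve at runtime from Azure Key Vault using the stack's
  # managed identity, instead of storing the value in Spacelift.
  value_from {
    azure_key_vault {
      vault_uri   = "https://example-vault.vault.azure.net"
      secret_name = "db-password"
    }
  }
}
```

With something like this, rotation happens once in Key Vault and every run picks up the new value automatically.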

💡 Feature Requests

2 days ago

4

🔭 Discovery

Expose Full Dependency Tree and Root Run ID in Policies

We are heavily using stack dependencies, and when one stack triggers many others, we need visibility into the full dependency chain inside policies. Right now, we only have access to limited upstream information. It would be very helpful if every policy type, including notification policies, had access to the full dependency tree and the root run ID. This would allow us to:

- Track who originally started the run
- Thread Slack notifications properly
- Reference the same root run across all dependent stacks
- Build cleaner approval and notification logic

Currently, we have to rely on workarounds and custom scripting. Having full dependency visibility in policies would simplify our setup significantly.
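As a rough Rego sketch of the Slack-threading use case: the root_run_id and dependency_tree fields below are hypothetical (they are not part of the policy input today), and the slack rule shape is simplified — see the notification policy documentation for the real schema.

```rego
package spacelift

# Hypothetical inputs requested by this feature:
#   input.run_updated.run.root_run_id     - ID of the run that started the chain
#   input.run_updated.run.dependency_tree - e.g. list of {stack_id, run_id, parent_run_id}

# Post all messages for dependent runs keyed by the root run, so every
# stack in the chain can be threaded under one Slack conversation.
slack contains {
	"channel_id": "C0123456789",
	"message": sprintf("Run %s finished (root run: %s)", [
		input.run_updated.run.id,
		input.run_updated.run.root_run_id,
	]),
} if {
	input.run_updated.run.state == "FINISHED"
}
```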

💡 Feature Requests

3 days ago

2

🔭 Discovery

Execute Ansible-vendored tracked runs concurrently within a stack

As a platform engineer, I am building a platform (currently in the proof-of-concept stage) that uses Ansible-vendored Spacelift stacks to execute Ansible playbooks. These stack runs are triggered by orchestration via the API, such that each stack run executes against one EC2 instance as the instances are launched. Because stack runs are blocking, a stack cannot run multiple Ansible playbooks concurrently. Tasks are not a valid alternative, as they do not benefit from the resource/host configuration management natively built into Spacelift stacks. Jubran Nassar is familiar with our use case. If we end up selecting Spacelift as the tool for our platform, this will be important to solve, but I am marking it as nice-to-have for now. We should know more in the next few weeks.

💡 Feature Requests

22 days ago

2

Allow MCP-based integrations with Spacelift without requiring spacectl

Problem

Spacelift currently provides MCP functionality through the spacectl MCP server. To use MCP with Spacelift today, users must:

1. Install spacectl
2. Authenticate using spacectl profile login
3. Configure their coding assistant to run: spacectl mcp server

Requiring a CLI purely to support MCP integrations makes it harder to integrate Spacelift into internal platforms that standardize on API-driven MCP servers.

Proposed Solution

Provide a way to integrate Spacelift into MCP-based AI systems without requiring spacectl. This could be achieved in one of two ways:

Option 1: Provide an official Spacelift MCP server that communicates directly with the Spacelift API / GraphQL.

Option 2: Provide documented integration patterns that allow customers to easily build their own MCP servers using the existing GraphQL API.

Ideal Capabilities

An MCP integration should allow AI tools to interact with Spacelift capabilities such as:

- discovering modules in the Spacelift module registry
- retrieving module metadata (inputs, outputs, examples)
- identifying the latest module version
- generating infrastructure code using approved modules

This enables AI assistants to generate infrastructure that aligns with an organization's approved module ecosystem.

Customer Value

Many organizations are adopting AI-assisted infrastructure development. When infrastructure patterns are encapsulated into approved modules, AI tools can safely generate infrastructure code by discovering and using those modules. Allowing MCP integrations without a CLI dependency would make it easier for organizations to integrate Spacelift into internal developer platforms where multiple systems are connected through MCP servers and APIs.
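For context, the spacectl-based setup described above typically looks like this in a common MCP client configuration (the exact file location and top-level key vary by assistant; "mcpServers" follows the convention used by several popular clients):

```json
{
  "mcpServers": {
    "spacelift": {
      "command": "spacectl",
      "args": ["mcp", "server"]
    }
  }
}
```

An API-native option would replace the local command with a remote server entry, removing the need to install and authenticate spacectl on every developer machine.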

💡 Feature Requests

14 days ago

🔭 Discovery

Programmatic access to Spacelift managed state

We have thousands of stacks, and their definitions are multilayered: we have a “base”; on top of the base, we build “foo” and “bar” types; then, for example, we could add an “aaa” on top of foo; and eventually we create a stack on top of those other definitions. This works fine in most cases. But when we have to make a change to the “base” layer, we have to manually review every proposed change. Even for a dozen or so stacks this becomes painful; it is completely untenable for hundreds, to say nothing of thousands.

This would be much easier if we had a way to reference the state locally, so we could run a local terraform plan just to figure out the scope of a proposed change. To be clear: I am only talking about read-only access to the state. Ideally, we could annotate the Terraform somehow so that we did NOT have to make any change to run terraform plan locally, and however it was set up, it would not interfere with the normal “file injection” Spacelift does.

For the record, I am aware of https://docs.spacelift.io/vendors/terraform/state-management#exporting-spacelift-managed-terraform-state-file. Doing that for dozens of states/stacks is EXTREMELY time consuming, and it would be a non-starter for hundreds or more.

💡 Feature Requests

About 1 month ago

7