Feature Requests

Got an idea for a feature request? Let us know! Share your ideas on improving existing features or suggest something new. Vote on ideas you find useful!

Make sure to read our guidelines before posting 📖

🔭 Discovery

Execute Ansible-vendor tracked runs concurrently within a stack

As a platform engineer, I am building a platform (currently in the proof-of-concept stage) that uses Ansible-vendored Spacelift stacks to execute Ansible playbooks. These stack runs are triggered by orchestration via the API, such that each stack run executes against one EC2 instance as it is launched. Because stack runs are blocking, a stack cannot run multiple Ansible playbooks concurrently. Tasks are not a valid alternative, as they do not provide the resource/host configuration management natively built into Spacelift stacks.

Jubran Nassar is familiar with our use case. If we end up selecting Spacelift as the tool for our platform, this will be important to solve, but I am marking it as nice-to-have for now. We should know more in the next few weeks.

💡 Feature Requests · 8 days ago · 1 vote

🔭 Discovery

Programmatic access to Spacelift managed state

We have thousands of stacks, and their definitions are multilayered: we have a "base"; on top of the base we build "foo" and "bar" types; then, for example, we might add an "aaa" on top of foo; and eventually we create a stack on top of those other definitions. This works fine in most cases. But when we have to make a change to the "base" layer, we have to manually view every proposed change. For even a dozen or so stacks this becomes painful; it is completely untenable for hundreds, to say nothing of thousands.

This would be much easier if we had a way to reference the state locally, so we could run a local terraform plan just to figure out the scope of a proposed change. To be clear, I am only talking about read-only access to the state. Ideally, we could annotate the Terraform somehow so that we did NOT have to make any change to run terraform plan locally, and however it was set up, it would not interfere with the normal "file injection" Spacelift does.

For the record, I am aware of https://docs.spacelift.io/vendors/terraform/state-management#exporting-spacelift-managed-terraform-state-file. Doing that for dozens of states/stacks is EXTREMELY time-consuming, and it would be a non-starter for hundreds or more.
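A bulk, read-only workflow might look like the sketch below. The `stateDownloadUrl` field and the exact query shape are assumptions for illustration, not Spacelift's documented schema; the point is the structure: resolve a download location per stack via GraphQL, then fetch each state locally for a scoped terraform plan.

```python
# Hypothetical sketch: bulk read-only export of Spacelift-managed state.
# The GraphQL field names (stack, stateDownloadUrl) are assumptions, not
# the documented schema -- verify against the actual Spacelift API.
import json

API_URL = "https://<account>.app.spacelift.io/graphql"  # placeholder account


def state_query(stack_id: str) -> dict:
    """Build a GraphQL request body asking for a state download URL."""
    return {
        "query": "query($id: ID!) { stack(id: $id) { stateDownloadUrl } }",
        "variables": {"id": stack_id},
    }


def export_all(stack_ids, post):
    """Resolve a state URL per stack via an injected HTTP `post` callable."""
    urls = {}
    for sid in stack_ids:
        resp = post(API_URL, json.dumps(state_query(sid)))
        urls[sid] = resp["data"]["stack"]["stateDownloadUrl"]
    return urls
```

Injecting the HTTP callable keeps the loop testable without network access; in practice it would be a thin wrapper around an authenticated POST.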

💡 Feature Requests · 21 days ago · 7 votes

Allow MCP-based integrations with Spacelift without requiring spacectl

Problem

Spacelift currently provides MCP functionality through the spacectl MCP server. To use MCP with Spacelift today, users must:

- Install spacectl
- Authenticate using spacectl profile login
- Configure their coding assistant to run: spacectl mcp server

Requiring a CLI purely to support MCP integrations makes it harder to integrate Spacelift into internal platforms that standardize on API-driven MCP servers.

Proposed Solution

Provide a way to integrate Spacelift into MCP-based AI systems without requiring spacectl. This could be achieved in one of two ways:

- Option 1: Provide an official Spacelift MCP server that communicates directly with the Spacelift API / GraphQL.
- Option 2: Provide documented integration patterns that allow customers to easily build their own MCP servers using the existing GraphQL API.

Ideal Capabilities

An MCP integration should allow AI tools to interact with Spacelift capabilities such as:

- discovering modules in the Spacelift module registry
- retrieving module metadata (inputs, outputs, examples)
- identifying the latest module version
- generating infrastructure code using approved modules

This enables AI assistants to generate infrastructure that aligns with an organization's approved module ecosystem.

Customer Value

Many organizations are adopting AI-assisted infrastructure development. When infrastructure patterns are encapsulated into approved modules, AI tools can safely generate infrastructure code by discovering and using those modules. Allowing MCP integrations without requiring a CLI dependency would make it easier for organizations to integrate Spacelift into internal developer platforms where multiple systems are connected through MCP servers and APIs.
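Option 2 could look roughly like the sketch below: an internal MCP-style server exposes module-registry tools, each backed by a GraphQL query against the Spacelift API. The tool names and GraphQL fields here are illustrative assumptions, not an official schema, and the MCP transport layer is omitted.

```python
# Illustrative sketch of Option 2: mapping MCP tool names to GraphQL
# queries. Tool names and field names (modules, latestVersion, ...) are
# assumptions for illustration, not the documented Spacelift schema.

TOOLS = {
    "list_modules": "query { modules { id name } }",
    "module_metadata": "query($id: ID!) { module(id: $id) { inputs outputs } }",
    "latest_version": "query($id: ID!) { module(id: $id) { latestVersion { number } } }",
}


def handle_tool_call(name: str, variables: dict, execute) -> dict:
    """Dispatch an MCP tool call to an injected GraphQL `execute` callable."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return execute(TOOLS[name], variables)
```

A real server would wrap this dispatch in the MCP protocol and authenticate `execute` against the account's GraphQL endpoint; the mapping itself is the part customers would maintain.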

💡 Feature Requests · About 7 hours ago

Module registry: add version lifecycle states with optional sunset date

“Mark version as bad” is informational only. Enterprise customers need a structured lifecycle so module authors can deprecate versions with a grace period and then block unsupported versions without maintaining brittle external OPA logic. In many organizations, infrastructure patterns are encapsulated into approved modules (e.g., networking, S3 buckets, IAM roles). When these patterns are modularized, developers can safely self-serve infrastructure by consuming those modules rather than building resources directly.

Proposed solution

Add a first-class lifecycle state for each module version:

- active
- deprecated (optional sunset_date)
- unsupported

Ideal behavior

- Using a deprecated version: plan succeeds but emits a warning stating that it’s deprecated, the recommended version, and the sunset date (if set).
- Using an unsupported version: plan fails (hard stop).

Acceptance criteria

- Registry UI shows the lifecycle state per version and (if applicable) the sunset date.
- Lifecycle state is persisted per version and queryable via GraphQL.
- Deprecated usage generates a warning surfaced in the run.
- Unsupported usage blocks the run automatically.
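The ideal behavior above can be sketched as a small plan-time check. This is a model of the proposal, not an existing Spacelift feature; the state names follow the lifecycle states listed above.

```python
# Sketch of the proposed lifecycle check, assuming a per-version state
# persisted in the registry: "active" | "deprecated" | "unsupported".
from datetime import date
from typing import Optional, Tuple


def check_version(state: str,
                  sunset: Optional[date] = None,
                  recommended: Optional[str] = None) -> Tuple[bool, Optional[str]]:
    """Return (allowed, warning) for a module version at plan time."""
    if state == "unsupported":
        # Hard stop: the run is blocked automatically.
        return False, "version is unsupported; run blocked"
    if state == "deprecated":
        # Plan proceeds, but a warning is surfaced in the run.
        msg = "version is deprecated"
        if recommended:
            msg += f"; upgrade to {recommended}"
        if sunset:
            msg += f" before {sunset.isoformat()}"
        return True, msg
    return True, None  # active: no warning
```

Keeping this as registry-side first-class state is what removes the need for each customer to replicate the same logic in external OPA policies.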

💡 Feature Requests · About 7 hours ago

🔭 Discovery

Include the upstream stack's run_id, and any other metadata, in policy inputs

We currently have a stack that, when triggered, triggers its dependencies. We want to create one notification policy and have a unique id we can use for the entire dependency chain, since run.id is per stack. I tried exposing the upstream stack's run_id as an output, but when dependent stacks are triggered, the policy inputs contain neither that output nor the run_id of the upstream stack. For our purposes, just a triggered_by_run_id would be perfect, but in general the feature is to include all possible metadata, especially the inputs the stack received from the outputs of an upstream stack. Something like inputs.triggered_by_outputs would be nice too.
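The desired behavior could be modeled like this: each downstream run's policy input carries the upstream run's id and outputs, and a hypothetical chain-root field shows how a single correlation id could propagate through the whole dependency chain. The field names (triggered_by_run_id, triggered_by_outputs, chain_root_run_id) follow the proposal above and are not part of Spacelift's current policy input schema.

```python
# Sketch of the proposed policy input: downstream runs carry the
# upstream run's id/outputs plus a propagated chain-root id. All field
# names are hypothetical, per the feature request above.

def build_policy_input(run_id: str, upstream: dict = None) -> dict:
    """Assemble the `run` portion of a notification policy input."""
    run = {"id": run_id}
    if upstream:
        run["triggered_by_run_id"] = upstream["id"]
        run["triggered_by_outputs"] = upstream.get("outputs", {})
        # Propagate the chain root so every run in the dependency
        # chain shares one correlation id.
        run["chain_root_run_id"] = upstream.get("chain_root_run_id",
                                                upstream["id"])
    else:
        run["chain_root_run_id"] = run_id
    return {"run": run}
```

With something like this, one notification policy could group events by chain_root_run_id instead of the per-stack run.id.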

💡 Feature Requests · 16 days ago · 3 votes