Feature Requests

Got an idea for a feature request? Let us know! Share your ideas on improving existing features or suggest something new. Vote on ideas you find useful!

Make sure to read our guidelines before posting 📖

Execute Ansible-vendored tracked runs concurrently within a stack

As platform engineers, we are building a platform (currently in the proof-of-concept stage) that uses Ansible-vendored Spacelift stacks to execute Ansible playbooks. These stack runs are triggered by our orchestration via the API, so that each stack run executes against one EC2 instance as it is launched. Because stack runs are blocking, a stack cannot run multiple Ansible playbooks concurrently. Tasks are not a valid alternative, as they do not provide the resource/host configuration management natively built into Spacelift stacks. Jubran Nassar is familiar with our use case. If we end up selecting Spacelift as the tool for our platform, this will be important to solve, but I am marking it as nice-to-have for now. We should know more in the next few weeks.

💡 Feature Requests

4 days ago

🔭 Discovery

Programmatic access to Spacelift managed state

We have thousands of stacks, and their definitions are multilayered. (I.e. we have a “base”. Then, on top of the base, we build a “foo” and “bar” type. Then, for example, we could add an “aaa” on top of foo. And eventually we create a stack on top of those other definitions.) This works fine in most cases. But when we have to make a change to the “base” layer, we have to manually review every proposed change. Even for a dozen or so stacks this becomes painful; it is completely untenable for hundreds, to say nothing of thousands.

This would be much easier if we had a way to reference the state locally. Then we could run a local terraform plan just to figure out the scope of a proposed change. To be clear: I am only talking about read-only access to the state. Ideally, we could annotate the Terraform somehow so that we did NOT have to make any changes to run terraform plan locally. And however it was set up, it should not interfere with the normal “file injection” Spacelift does.

FTR, I am aware of https://docs.spacelift.io/vendors/terraform/state-management#exporting-spacelift-managed-terraform-state-file. Doing that for dozens of states/stacks is EXTREMELY time-consuming, and it would be a non-starter for hundreds or more.

💡 Feature Requests

17 days ago

7

🔭 Discovery

Include the run_id of an upstream stack, and any other metadata, in policy inputs

We currently have a stack that, when triggered, triggers its dependencies. We want to create a single notification policy and have a unique ID we can use for the entire dependency chain, since run.id is per stack. I tried exposing the upstream stack’s run_id as an output, but when dependent stacks get triggered, the policy inputs contain neither this information nor the run_id of the upstream run. For our purposes just a triggered_by_run_id would be perfect, but in general the feature request is to include all possible metadata, especially the inputs the stack received from the outputs of an upstream stack. Something like inputs.triggered_by_outputs would be nice too.
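For illustration, here is a sketch of how one notification policy could key the whole dependency chain on such a field. Note that triggered_by_run_id is the hypothetical field being requested and does not exist in today’s policy input; the Slack channel ID is a placeholder:

```rego
package spacelift

# Hypothetical: assumes a future field run.triggered_by_run_id carrying the
# run ID of the upstream run that triggered this one.
slack contains {"channel_id": "C0123456789", "message": msg} if {
	run := input.run_updated.run
	msg := sprintf("chain %s: run %s on stack %s reached %s",
		[run.triggered_by_run_id, run.id, input.run_updated.stack.id, run.state])
}
```

With this, a single policy attached to every stack in the chain could group all notifications under the upstream run’s ID.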

💡 Feature Requests

12 days ago

3

Notification Policy - Pull Request comment not auto-resolved

By default, Pull Request comments (as outlined in the docs: https://docs.spacelift.io/concepts/policy/notification-policy#creating-a-pr-comment) are auto-resolved/closed, at least on Azure DevOps. This default behaviour is fine for the built-in PR integration, which simply reports proposed changes on a PR run back to the Azure DevOps UI itself. However, if the notification policy is being used to flag something that requires user input, it would be useful to be able to turn off this auto-resolution behaviour. That way the comment would act like a normal user comment which needs some action (or discussion) to resolve. Without this, comments can be ignored and changes can be merged without any action being taken. On Azure DevOps this is enforced under Repos > Policy > Check for comment resolution. On GitHub it looks like there is a “Require conversation resolution before merging” setting. Something like this could be implemented:

```rego
package spacelift

pull_request contains {
	"commit": run.commit.hash,
	"body": "",
	"auto-resolve": false
} if {
	...
}
```

Here the auto-resolve input would default to true, so as not to break existing behaviour.

💡 Feature Requests

5 days ago

⬆️ Gathering votes

Better observability for run times and worker utilization

We run self-hosted Spacelift and would like the following observability.

Run Execution Metrics

- End-to-end run duration (histogram): distribution of total wall-clock time from run creation to terminal state. Enables tracking p50/p90/p99 durations, detecting regressions, and setting SLOs on deployment latency. Should support labels for stack, space, run type, and terminal state.

Worker Pool Metrics

- Runs queued for workers (gauge): count of runs currently waiting for a worker, queryable over time. Enables alerting on queue saturation and right-sizing worker pools. Should support labels for stack, space, and worker pool.
- Per-run worker wait time (histogram): distribution of time each run spends waiting for a worker before execution begins. Should support labels for stack, space, and worker pool.

💡 Feature Requests

20 days ago

2