Conditional enablement of stacks within templates
We would like the ability to conditionally enable or disable stacks defined in a template based on input values. A common use case is selectively deploying optional components, for example via a boolean input such as enable_service_x. When set to false, the corresponding stack should not be created or executed. This becomes particularly important in templates that define multiple related stacks, where some components are optional depending on environment, tenant, or feature flags.

Expected behaviour (a sketch of the graph resolution follows below):
- Stacks can be conditionally included or excluded based on template inputs.
- Disabled stacks are treated as if they do not exist for that run.
- Any dependencies referencing a disabled stack are ignored rather than causing errors.
- The dependency graph is resolved dynamically after conditions are evaluated.
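To make the last two bullets concrete, here is a minimal sketch of the condition-then-prune resolution order, in plain Python with hypothetical stack and dependency structures (none of these names are Spacelift API):

```python
def resolve_graph(stacks, dependencies, inputs):
    """Resolve the dependency graph after evaluating enablement conditions.

    stacks: maps stack name -> condition, a callable taking the template
            inputs (e.g. lambda i: i["enable_service_x"]); None = always on.
    dependencies: list of (dependent, depends_on) stack-name pairs.
    """
    enabled = {
        name for name, condition in stacks.items()
        if condition is None or condition(inputs)
    }
    # Edges that reference a disabled stack are dropped, not errors.
    edges = [(a, b) for a, b in dependencies if a in enabled and b in enabled]
    return enabled, edges


stacks = {
    "network": None,                                  # always created
    "service_x": lambda i: i["enable_service_x"],     # optional component
    "service_x_dns": lambda i: i["enable_service_x"],
}
dependencies = [("service_x", "network"), ("service_x_dns", "service_x")]

print(resolve_graph(stacks, dependencies, {"enable_service_x": False}))
# -> ({'network'}, []): the disabled stacks and their edges simply vanish
```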
Feature Requests
about 20 hours ago
Stacks
Discovery
Support Strict Read-Only Operation Mode on Spacelift/Spacectl MCP
Add a read-only operation mode in order to enforce strict safety/security boundaries around the use of the Spacelift MCP.
Feature Requests
1 day ago
Access Control
Only space_admin can create stack dependencies.
Currently, you are unable to create stack dependencies without the space_admin role.
Feature Requests
about 1 month ago
Access Control
BYOM: Allow custom base URL for self-hosted / internal LLM endpoints
Problem
The current BYOM configuration accepts an API key for a supported provider (Anthropic, OpenAI, Gemini), but does not allow specifying a custom base URL. Enterprise organizations typically route LLM traffic through an internally hosted proxy or gateway (e.g., LiteLLM) to enforce security controls, model governance, and cost management. In these environments, using a personal or team API key tied to a commercial provider account is not viable: all traffic must go through an internal endpoint on an approved FQDN.

Requested Solution
Add a custom base URL field to the Spacelift AI BYOM configuration, alongside the existing API key field. This would allow Spacelift to direct AI requests to any OpenAI-compatible endpoint (e.g., https://llm.internal.example.com/v1) rather than only to a commercial provider's public API. This pattern is standard across OpenAI-compatible clients (OPENAI_BASE_URL, openai.base_url in the Python SDK, etc.) and is how tools like LiteLLM, Azure OpenAI, vLLM, and Ollama are accessed.

Use Case
Our organization runs a centrally managed LiteLLM instance serving internally approved models via an OpenAI-compatible API. We want Spacelift AI features (plan summaries, resource explanations, policy suggestions) to use this internal endpoint, but the current BYOM flow only accepts a provider API key, with no way to redirect the base URL.

Priority
High. This is a blocker for enterprise customers with internal model governance requirements.
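For reference, this is what the pattern looks like with the official OpenAI Python SDK pointed at an internal gateway; the endpoint URL, key, and model name below are placeholders, and Spacelift's internal client code is of course not shown here:

```python
from openai import OpenAI

# Any OpenAI-compatible gateway (LiteLLM, vLLM, Azure OpenAI, Ollama, ...)
# can be targeted just by overriding the base URL; the rest of the client
# code is unchanged. OPENAI_BASE_URL achieves the same via the environment.
client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # internal endpoint
    api_key="internal-gateway-key",                  # key issued by the proxy
)

response = client.chat.completions.create(
    model="internally-approved-model",  # whatever the gateway exposes
    messages=[{"role": "user", "content": "Summarize this Terraform plan."}],
)
print(response.choices[0].message.content)
```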
Feature Requests
22 days ago
Integrations
Disable public worker pools at the account/space level
## Requested Solution

Add an account-level (and optionally space-level) toggle: **"Disable public worker pools."** When enabled:

- Stacks without a `worker_pool_id` cannot trigger runs; they fail immediately with a clear error message before reaching any worker
- The Spacelift UI hides or disables the "use public workers" option when creating/editing stacks
- API and Terraform provider calls that create or update a stack without a `worker_pool_id` are rejected

This should be inheritable: setting it at a parent space cascades to all children, matching the existing space-based RBAC model. (A sketch of the cascading lookup follows below.)

---

## Use Case

Our organization requires all Terraform execution to occur on internally managed private worker pools for security and compliance. Today, this requires writing and maintaining an OPA/Rego PLAN policy, attaching it to the correct space, and accepting that the policy only fires after `terraform plan` has already executed on the public runner. A misconfigured or newly created stack silently defaults to public runners with no preventive guardrail. A simple toggle would eliminate the need for policy-based workarounds entirely.

---

## Priority

High. This is a blocker for enterprise customers with private infrastructure mandates.
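To make the cascade concrete, a minimal sketch in plain Python with hypothetical Space objects (not Spacelift code): the effective setting is the nearest explicit value found walking up the space tree, so a parent-space toggle governs all children.

```python
class Space:
    def __init__(self, name, parent=None, disable_public_workers=None):
        self.name = name
        self.parent = parent
        # None means "not set here; inherit from the parent space"
        self.disable_public_workers = disable_public_workers


def public_workers_disabled(space):
    """Nearest explicit setting wins, walking up the space tree."""
    while space is not None:
        if space.disable_public_workers is not None:
            return space.disable_public_workers
        space = space.parent
    return False  # account default: public workers allowed


def validate_stack_run(worker_pool_id, space):
    """Reject the run up front, before it ever reaches a worker."""
    if worker_pool_id is None and public_workers_disabled(space):
        raise ValueError(
            f"public worker pools are disabled in space '{space.name}'; "
            "attach a private worker pool to this stack"
        )


root = Space("root", disable_public_workers=True)  # toggle set at the parent
dev = Space("dev", parent=root)                    # child inherits it

try:
    validate_stack_run(worker_pool_id=None, space=dev)
except ValueError as err:
    print(err)  # fails immediately with a clear error message
```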
Feature Requests
14 days ago
Access Control
Native Git support
Add support for using a native Git binary to clone repositories in Spacelift. This would be really useful for teams that want more control over the checkout process, including options for partial cloning and mirror support.
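For context, this is the kind of control a native Git binary would enable; the flags are standard Git, while the repository URL and paths are placeholders:

```python
import subprocess

# Partial (blobless) clone: fetch commits and trees now, blobs on demand.
subprocess.run(
    ["git", "clone", "--filter=blob:none", "--depth=1",
     "git@github.com:example/infra.git", "workspace"],
    check=True,
)

# Mirror clone: a full bare copy of all refs, useful as a local cache that
# subsequent checkouts can reference instead of hitting the VCS each run.
subprocess.run(
    ["git", "clone", "--mirror",
     "git@github.com:example/infra.git", "infra-mirror.git"],
    check=True,
)
```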
Feature Requests
6 days ago
VCS
Allow read-only AWS integrations to be autoattached
We have separate AWS IAM roles for Spacelift integration attachments, so we can allow preview runs to run on unapproved code changes without worrying about someone being able to change the underlying AWS resources from their GitHub branch. This is working well, except that autoattach: no longer works for attaching these integrations, because auto-attachment defaults to allowing writes. So we have to explicitly create every integration attachment for every stack. Is it possible to allow autoattach_read: (and autoattach_write: for symmetry) labels on AWS integrations to specify what the integration should be used for on the stacks that match?
Feature Requests
16 days ago
Integrations
Sync PR Title Updates to Spacelift Runs
When a pull request title is edited in GitHub after a run has been created, the updated title should be reflected in Spacelift.

Current Behavior
PR details (including the title) are captured at the time the run is created. Any subsequent edits to the PR title in GitHub are not propagated back to Spacelift; the run continues to display the original title.

Desired Behavior
Spacelift should listen for PR title update events from GitHub and update the corresponding run metadata accordingly, so the title shown in Spacelift always stays in sync with the current PR title.

Why This Matters
Teams often rename PRs to reflect scope changes, status (e.g. adding [WIP] or [READY]), or updated descriptions as work progresses. When Spacelift shows a stale title, it creates confusion when navigating runs and correlating them with open PRs.
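For reference, a GitHub pull_request webhook with action "edited" already carries both the old and new title, so a sketch of the desired sync might look like this (plain Python; update_run_metadata is a hypothetical stand-in for whatever Spacelift does internally):

```python
def update_run_metadata(pr_number, old_title, new_title):
    # Hypothetical persistence hook; stands in for updating run metadata.
    print(f"PR #{pr_number}: {old_title!r} -> {new_title!r}")


def handle_pull_request_event(payload):
    """React to GitHub 'pull_request' webhook deliveries."""
    if payload.get("action") != "edited":
        return
    changes = payload.get("changes", {})
    if "title" not in changes:
        return  # some other edit (body, base branch, ...), title unchanged
    pr = payload["pull_request"]
    update_run_metadata(
        pr_number=pr["number"],
        old_title=changes["title"]["from"],  # provided by GitHub on edits
        new_title=pr["title"],
    )


handle_pull_request_event({
    "action": "edited",
    "changes": {"title": {"from": "[WIP] add VPC"}},
    "pull_request": {"number": 42, "title": "[READY] add VPC"},
})
```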
Feature Requests
9 days ago
Delta and changes counts are misleading on failed and discarded runs
Problem
The Changes view and the delta counts are generated at plan time; after apply, the delta counts are not updated to reflect the run outcome, and the Changes view has its own issues. When apply fails, the UI gives little to no indication of what actually happened. This is misleading for users reviewing completed runs.

Delta counts only reflect the plan
The + N ~ N - N delta shown in the Changes tab header within a run, and in the tracked runs list, is calculated from the plan and never updated after apply. In the run details view, this is somewhat understandable as a summary of what the plan proposed. But in the tracked runs list, the delta sits right next to the run's status badge. A row showing FAILED and + 9 ~ 1 - 0 gives no indication of the run's outcome. For unapplied or proposed runs the plan delta makes perfect sense, but for completed runs, especially discarded and failed ones, it becomes very deceptive. We understand these are simply plan deltas, but surfacing the actual apply outcome here would be a significant improvement and a very useful signal.

There's also an inconsistency: proposed runs have plans but don't show a delta in the MR/PR list. If the delta is meant to represent plan output rather than apply outcome, it should appear on proposed runs too. Showing it consistently across all runs would at least help users learn to read it as "what was planned" rather than "what happened."

Why this matters
For runs that fail or only partially apply, the UI gives no useful indication of what actually happened. The only way to find this out is to read the raw apply logs. In some cases, the gap between what the UI shows and what actually happened can be dangerous. This is compounded by the fact that run logs expire. Since the Changes view cannot be relied on, once the Terraform output is gone, the delta counts might be the only record of what a run did, making it very easy for users to read plan deltas as apply outcomes.

Suggested improvements
Fix the Changes view. Consider updating delta counts after apply, or showing both planned and applied/failed counts (e.g., "Planned: +9 ~1 | Failed: +2 ~1"), especially valuable given that these counts outlive the logs. The current plan-only view is fine for unconfirmed/discarded runs, but once apply runs, the badges should reflect reality, providing correct and durable information about the outcome. Currently, this isn't available anywhere else once logs expire.

Show plan deltas consistently across all run states that have a plan (including proposed runs), so users can learn to read them as plan output rather than apply outcome.
Feature Requests
20 days ago
UI/UX
Changes view shows "SUCCESSFUL" badges on failed resources
Problem
When a run's apply phase partially fails, the Changes view and the delta counts do not reflect what actually happened. This is confusing and misleading for users reviewing completed runs.

"SUCCESSFUL" badges on resources that failed to apply
This is a critical issue. The Changes view often shows a green SUCCESSFUL badge on every resource that was part of a confirmed plan, including resources that errored during apply. Notably, when a planned run is discarded, the badges stay at PLANNED. So the current behavior, as confirmed by Spacelift, is that "SUCCESSFUL" means "the plan was applied"; this is factually wrong for resources that failed during apply. In the context of a finished run, on a tab called "Changes," a green SUCCESSFUL badge next to a resource means one thing to every user: this resource was successfully changed. When any number of resources shown in the delta actually failed to apply, showing all of them as SUCCESSFUL is actively incorrect and utterly confusing.

Why this matters
For runs where apply partially succeeds, the gap between what the UI shows and what actually happened is dangerous: users may assume changes landed when they didn't, or miss that resources are in a broken state. The only way to find out what actually happened is to read the raw apply logs; the UI provides no help. This is compounded by the fact that run logs expire. Once the Terraform output is gone, the delta counts and Changes view become the only record of what a run did, and right now they can be wrong.

Suggested improvements
Update the Changes view after apply: show per-resource apply outcome as failed, successful, or not attempted ("planned" can be equally confusing to less-experienced users, who may miss the fact that a run was discarded). The current plan-only view is fine for unconfirmed/discarded runs, but once apply runs, the badges should reflect reality, providing correct and durable information about the outcome. Currently, this isn't available anywhere else once logs expire.

At minimum, stop labeling failed resources as SUCCESSFUL. If apply errored on a resource, that resource is not successful by any definition.
Feedback
20 days ago
UI/UX
Cache the Spacelift Provider on Public Workers
Cache the Spacelift provider on public workers so that runs are no longer dependent on GitHub availability at execution time. This would reduce run failures caused by GitHub outages, and would also improve run times by removing a repeated network fetch.
Feature Requests
about 8 hours ago
Workers
Slack Notification Blocks
Currently, formatting of Slack messages is limited: only markdown/plain-string messages can be sent through. The integration does not accept Slack JSON constructs such as blocks, attachments, or fields. (An example payload follows below.)
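For illustration, this is the kind of Block Kit payload being asked for; the structure is standard Slack JSON, while the webhook URL and field values are placeholders:

```python
import json
import urllib.request

# A standard Block Kit message: a section block with fields, which plain
# markdown strings cannot express.
payload = {
    "blocks": [
        {
            "type": "section",
            "text": {"type": "mrkdwn", "text": "*Run finished:* `production-vpc`"},
            "fields": [
                {"type": "mrkdwn", "text": "*State:*\nFINISHED"},
                {"type": "mrkdwn", "text": "*Changes:*\n+9 ~1 -0"},
            ],
        }
    ]
}

req = urllib.request.Request(
    "https://hooks.slack.com/services/T000/B000/XXXX",  # placeholder webhook
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```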
Feature Requests
about 12 hours ago
Slack Proposal Promotion
I like the fact that we are able to confirm/discard the tracked run from Slack. Is there a way to have this same functionality for promoting proposed runs?
Feature Requests
about 12 hours ago
TTL on Intent Projects with automatic resource cleanup
Expose a ttl (or expires_at) property on Intent Projects with the following behavior:

Configuration — TTL can be set at project creation or updated later, expressed either as a duration (72h, 7d, 30d) or an absolute timestamp. It should be settable via UI, API, GraphQL, and the Terraform provider.

Default & policy control — Allow administrators to set a default TTL per space, and/or enforce a maximum TTL via an Intent policy (e.g., "all Intent Projects in the dev space must have a TTL ≤ 14 days").

Notifications — Send notifications (email/Slack via existing notification policies) to the project owner at configurable thresholds before expiration (e.g., 72h, 24h, 1h) with a one-click "extend TTL" action.

Expiration behavior — When the TTL elapses, Spacelift automatically performs a destroy of all resources managed by the project in dependency order, subject to existing Intent policies (so a policy denial on delete is still respected and surfaced). The full cleanup run is recorded on the project's History tab for auditability, matching how deletions are already tracked.

Project lifecycle after cleanup — Configurable: either archive the project (keeping history/receipts for audit) or fully delete it. Archive should be the default.

Lock & grace handling — If a project is actively locked by a user session when TTL expires, delay cleanup until the lock releases (or a configurable grace period elapses), and notify the lock holder.
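As a small illustration of the proposed duration syntax, a sketch in plain Python (the h/d suffixes follow the examples above; nothing here is existing Spacelift API):

```python
from datetime import datetime, timedelta, timezone
import re

_UNITS = {"h": "hours", "d": "days"}


def parse_ttl(value):
    """Turn a duration like '72h', '7d', '30d' into an absolute expires_at."""
    match = re.fullmatch(r"(\d+)([hd])", value)
    if not match:
        raise ValueError(f"bad TTL {value!r}: expected forms like 72h or 30d")
    amount, unit = int(match.group(1)), _UNITS[match.group(2)]
    return datetime.now(timezone.utc) + timedelta(**{unit: amount})


print(parse_ttl("72h"))  # an absolute UTC timestamp three days out
```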
Feature Requests
15 days ago
Auto-attachment of policies to Intent Projects via labels
Allow Intent policies (and ideally also contexts and integrations on Intent Projects) to use the existing autoattach: label convention. When an Intent Project is created or updated with a matching label, any policy carrying autoattach: should be automatically attached. Concretely (a sketch of the matching rule follows the list):

- An Intent policy labeled autoattach:intent-baseline should auto-attach to every Intent Project labeled intent-baseline.
- The wildcard form autoattach:* should attach a policy to all Intent Projects within the policy's space (and child spaces), matching the behavior described in the Context docs.
- Auto-attached policies should be visible in the Intent Project's Policies tab, clearly marked as auto-attached (same UX as stacks).
- Support should extend to the Terraform provider and GraphQL API so that Intent Projects can be governed declaratively from an administrative stack.
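A minimal sketch of that matching rule, in plain Python (illustrative only; it deliberately ignores the space-scoping part):

```python
AUTOATTACH_PREFIX = "autoattach:"


def should_attach(policy_labels, project_labels):
    """True if any autoattach: label on the policy matches the project."""
    for label in policy_labels:
        if not label.startswith(AUTOATTACH_PREFIX):
            continue
        target = label[len(AUTOATTACH_PREFIX):]
        if target == "*" or target in project_labels:
            return True
    return False


assert should_attach({"autoattach:intent-baseline"}, {"intent-baseline", "dev"})
assert should_attach({"autoattach:*"}, {"anything"})       # wildcard form
assert not should_attach({"autoattach:prod-only"}, {"dev"})
```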
Feature Requests
15 days ago
Orphaned PENDING_REVIEW runs from closed PRs block VCS settings changes with no UI path to resolve
Context: We have stacks configured to require approval on PROPOSED runs (used as a GitHub status check to block PR merge until someone reviews the plan in Spacelift). When we tried to update the Repository config on those stacks, it failed with "all runs need to finish before changing the VCS settings". We discarded/rejected everything visible in the PR view, but the error persisted. Using spacectl against the GraphQL API, we found hundreds of runs stuck in PENDING_REVIEW from PRs that had since closed. After bulk-discarding those via the API, the settings change went through. (A sketch of that workaround follows below.)

Repro:
1. Stack with an approval policy requiring approval on PROPOSED runs
2. Open a PR → PROPOSED run created in PENDING_REVIEW
3. Close the PR without ever approving/rejecting the run
4. Run stays in PENDING_REVIEW indefinitely, invisible in the stack's PR view
5. Try to change the Repository setting → blocked by "all runs need to finish before changing the VCS settings", with no way in the UI to find the offending runs

The FR (two possible angles, not mutually exclusive):
- Surface PENDING_REVIEW runs in the stack UI, especially ones orphaned by closed PRs. Today they're invisible once the PR is gone.
- List the blocking runs in the error message when a VCS settings change is rejected, so users can act on it without dropping to the API.

Behavior is technically correct (PENDING_REVIEW is legitimately "not finished"), but the visibility gap makes it undiagnosable from the console. We're unblocked for now; logging this for awareness.
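For completeness, a reconstruction of the bulk-discard workaround in Python against the GraphQL API. The query and mutation shapes here are illustrative from memory, not a verified Spacelift schema; treat the field names as assumptions and check the schema (or use spacectl) before running anything like this:

```python
import os
import requests

API_URL = "https://example.app.spacelift.io/graphql"  # placeholder account
HEADERS = {"Authorization": f"Bearer {os.environ['SPACELIFT_API_TOKEN']}"}


def gql(query, variables):
    resp = requests.post(
        API_URL, json={"query": query, "variables": variables},
        headers=HEADERS, timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]


# Assumed query/mutation shapes -- verify against the real schema first.
runs = gql(
    "query ($stack: ID!) { stack(id: $stack) { runs { id state } } }",
    {"stack": "my-stack"},
)["stack"]["runs"]

stuck = [r["id"] for r in runs if r["state"] == "PENDING_REVIEW"]
for run_id in stuck:
    gql(
        "mutation ($stack: ID!, $run: ID!) {"
        " runDiscard(stack: $stack, run: $run) { id } }",
        {"stack": "my-stack", "run": run_id},
    )
print(f"discarded {len(stuck)} orphaned runs")
```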
Feature Requests
1 day ago
PR comment in conversation and GitHub checks formatting.
Strongly distinguish between resources that are created, deleted, and modified. The deletion notification is fine as-is (Spacelift seems to display a deletion marker for each resource), but a few improvements would help:
- Use distinct colors for creations, deletions, and modifications (e.g., green, red, orange).
- Group and emphasize deleted resources, given their importance.
- Group and emphasize delete-then-create resources separately. These are dangerous and easily overlooked. In our manual GitHub setup, we colored these red rather than orange (like other updates) specifically because of their impact.
Feature Requests
3 days ago
Policies
Concurrency Limits Per Context / Provider
Currently, Spacelift does not support limiting the number of concurrent runs that share a specific context or hit a specific external provider/API. The only way to throttle this today is by isolating stacks into dedicated worker pools of size 1.
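In scheduler terms, the request is essentially a per-context semaphore instead of a per-pool worker count. A minimal sketch in plain Python, with hypothetical context names and limits:

```python
import threading

# Hypothetical per-context concurrency limits the scheduler would enforce.
LIMITS = {"shared-vcenter": 2, "rate-limited-api": 1}
_semaphores = {name: threading.BoundedSemaphore(n) for name, n in LIMITS.items()}


def run_with_context_limit(context_name, run):
    """Block until a slot for this context frees up, then execute the run."""
    with _semaphores[context_name]:
        run()


# Usage: any number of workers can exist, but at most LIMITS["rate-limited-api"]
# runs touching that context execute at once.
run_with_context_limit("rate-limited-api", lambda: print("terraform apply ..."))
```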
Feature Requests
17 days ago
Support private Docker runner images on the public worker pool
Allow stacks using the public (shared) worker pool to pull runner images from private container registries (e.g. ECR, Docker Hub private, GCR), rather than requiring images to be publicly accessible. Currently, customers on the public worker pool must publish their runner images to a public registry. With how Docker images currently work, this is intentional: the Docker daemon is shared across tenants, so cached image layers could be accessible to other customers. Private worker pools don't have this constraint, since the user owns the infrastructure (https://docs.spacelift.io/integrations/docker.html#using-private-docker-images). But in this user's situation, they would rather accept the risk of their image being leaked via the Docker daemon to other Spacelift customers than have to publish the image to a public registry.
Feature Requests
6 days ago