Discovery
Support Strict Read-Only Operation Mode on Spacelift/Spacectl MCP
Support a read-only operation mode to enforce strict safety and security boundaries around use of the Spacelift MCP.
Feature Requests
7 days ago
Access Control
Conditional enablement of stacks within templates
We would like the ability to conditionally enable or disable stacks defined in a template based on input values. A common use case is selectively deploying optional components, for example via a boolean input such as enable_service_x; when set to false, the corresponding stack should not be created or executed. This becomes particularly important in templates that define multiple related stacks, where some components are optional depending on environment, tenant, or feature flags.

Expected behaviour:
- Stacks can be conditionally included or excluded based on template inputs.
- Disabled stacks are treated as if they do not exist for that run.
- Any dependencies referencing a disabled stack are ignored rather than causing errors.
- The dependency graph is resolved dynamically after conditions are evaluated.
Feature Requests
7 days ago
Stacks
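The expected behaviour above can be sketched as follows. This is an illustration of the requested semantics only, not anything Spacelift implements today; the stack names, the `enabled_if` field, and the data shapes are all hypothetical.

```python
# Hypothetical sketch: evaluate conditions first, then rebuild the
# dependency graph from the enabled stacks only.
def resolve_stacks(stacks, inputs):
    """stacks: {name: {"enabled_if": input name or None, "depends_on": [names]}}"""
    enabled = {
        name for name, spec in stacks.items()
        if spec.get("enabled_if") is None or inputs.get(spec["enabled_if"], False)
    }
    # Dependencies pointing at disabled stacks are dropped, not errors.
    return {
        name: [dep for dep in stacks[name]["depends_on"] if dep in enabled]
        for name in enabled
    }

template = {
    "network":   {"enabled_if": None,               "depends_on": []},
    "service_x": {"enabled_if": "enable_service_x", "depends_on": ["network"]},
    "monitor_x": {"enabled_if": "enable_service_x", "depends_on": ["service_x"]},
}
graph = resolve_stacks(template, {"enable_service_x": False})
# Only "network" remains; monitor_x's dependency on service_x causes no error.
```

With `enable_service_x` set to true, all three stacks and both dependency edges come back, which is the "resolved dynamically after conditions are evaluated" part of the request.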
Gathering votes
Codeberg Integration
To support sovereign European code repositories without losing the quality-of-life features that the GitHub and GitLab integrations provide, we would like a fully fledged Codeberg integration with Spacelift, or the option to write our own.
Feature Requests
5 days ago
VCS
Worker Pool Assignment Based on Run Type (PROPOSED vs TRACKED)
Requested Solution
Add support for routing runs to different worker pools based on run type. The most common use case is:
- PROPOSED (PR previews) → public worker pool
- TRACKED (main branch deploys) → private worker pool
This could be implemented as a new policy type (e.g. WORKER_POOL) or as a per-stack configuration with two fields: worker_pool_proposed and worker_pool_tracked.

Use Case
Organizations on plans with a limited number of private workers want to use them efficiently. Private workers are ideal for tracked runs: they cache Docker layers and run on faster hardware. PR previews (proposed runs), however, are frequent and short-lived, making the public fleet a better fit for them. Today, worker pool assignment is stack-level only. Setting a private pool on a stack routes all runs, both proposed and tracked, to that pool, consuming the private worker even for PR previews. This forces a choice: either waste private worker capacity on previews, or don't use the private pool at all.
Feature Requests
about 15 hours ago
Workers
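The per-stack variant of the proposal can be sketched as a small routing function. The two field names are the ones suggested in this request, not an existing Spacelift API, and the fallback behaviour shown is one plausible choice among several.

```python
# Hypothetical routing for the proposed worker_pool_proposed /
# worker_pool_tracked fields. None means "use the public fleet".
def pool_for_run(stack: dict, run_type: str):
    if run_type == "TRACKED":
        # Tracked deploys prefer the dedicated tracked pool, falling back
        # to the stack's existing single pool setting.
        return stack.get("worker_pool_tracked") or stack.get("worker_pool_id")
    if run_type == "PROPOSED":
        # PR previews never consume a private worker unless one is
        # explicitly configured for proposed runs.
        return stack.get("worker_pool_proposed")
    return stack.get("worker_pool_id")

stack = {"worker_pool_tracked": "private-pool-1", "worker_pool_proposed": None}
assert pool_for_run(stack, "TRACKED") == "private-pool-1"
assert pool_for_run(stack, "PROPOSED") is None  # None -> public workers
```

The same decision could equally live in a policy (the WORKER_POOL policy type suggested above), with the stack fields acting as the default when no policy matches.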
Only space_admin can create stack dependencies.
You are unable to create stack dependencies without the space_admin role.
Feature Requests
about 1 month ago
Access Control
Support private OIDC JWKS endpoint routing for OIDC API Key validation
Currently, Spacelift's OIDC API Key feature requires the OIDC provider's JWKS endpoint to be publicly reachable (or reachable from Spacelift's egress IPs), because token validation is performed server-side by the Spacelift control plane. This blocks adoption in fully private or air-gapped environments where exposing the JWKS endpoint externally is not permitted by security policy.

Requested behaviour: provide a mechanism to route OIDC JWKS validation through a private worker pool or VCS agent, so users can use OIDC API Keys without requiring a publicly accessible OIDC endpoint.
Feature Requests
6 days ago
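For context on why reachability matters at all: to validate a token, the verifier has to fetch documents from the issuer. Under standard OIDC Discovery, the provider metadata (which contains the jwks_uri) lives at a well-known path under the issuer URL, and in an air-gapped setup neither that document nor the JWKS it points to is reachable from the control plane. The issuer host below is illustrative.

```python
# Pure URL construction, no network: the control plane must be able to
# fetch this discovery document (and the jwks_uri inside it) from the
# OIDC issuer, which is exactly what a private deployment forbids.
def oidc_discovery_url(issuer: str) -> str:
    # Per OIDC Discovery, provider metadata sits at a well-known path.
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

print(oidc_discovery_url("https://idp.internal.example.com/"))
# https://idp.internal.example.com/.well-known/openid-configuration
```

Routing these two fetches through a private worker or VCS agent, as requested, would let validation proceed without the issuer ever being exposed publicly.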
Gathering votes
Disable Infracost for Drift Detection runs
We use Infracost via Spacelift's integration. As per the docs, we also auto-attach a context containing our API key as an environment variable, using the infracost label. We would like to be able to disable the Infracost integration when a run is triggered via drift detection.
Feature Requests
5 days ago
Integrations
BYOM: Allow custom base URL for self-hosted / internal LLM endpoints
Problem
The current BYOM configuration accepts an API key for a supported provider (Anthropic, OpenAI, Gemini), but does not allow specifying a custom base URL. Enterprise organizations typically route LLM traffic through an internally hosted proxy or gateway (e.g., LiteLLM) to enforce security controls, model governance, and cost management. In these environments, using a personal or team API key tied to a commercial provider account is not viable: all traffic must go through an internal endpoint on an approved FQDN.

Requested Solution
Add a custom base URL field to the Spacelift AI BYOM configuration, alongside the existing API key field. This would allow Spacelift to direct AI requests to any OpenAI-compatible endpoint (e.g., https://llm.internal.example.com/v1) rather than only to a commercial provider's public API. This pattern is standard across OpenAI-compatible clients (OPENAI_BASE_URL, openai.base_url in the Python SDK, etc.) and is how tools like LiteLLM, Azure OpenAI, vLLM, and Ollama are accessed.

Use Case
Our organization runs a centrally managed LiteLLM instance serving internally approved models via an OpenAI-compatible API. We want Spacelift AI features (plan summaries, resource explanations, policy suggestions) to use this internal endpoint, but the current BYOM flow only accepts a provider API key, with no way to redirect the base URL.

Priority
High. This is a blocker for enterprise customers with internal model governance requirements.
Feature Requests
28 days ago
Integrations
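The mechanism this request relies on is simple: OpenAI-compatible clients resolve every request path against a configurable base URL (the OpenAI Python SDK reads OPENAI_BASE_URL for this). A minimal sketch of that resolution, with an illustrative internal hostname:

```python
import os

# How OpenAI-compatible clients pick their endpoint: every API path is
# resolved relative to a configurable base URL. The internal hostname
# below is illustrative, not a real endpoint.
def endpoint(path: str) -> str:
    base = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
    return base.rstrip("/") + "/" + path.lstrip("/")

os.environ["OPENAI_BASE_URL"] = "https://llm.internal.example.com/v1"
print(endpoint("chat/completions"))
# https://llm.internal.example.com/v1/chat/completions
```

A base URL field in the BYOM configuration would give Spacelift the same single knob: leave it unset for the provider's public API, or point it at an internal gateway like LiteLLM.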
Disable public worker pools at the account/space level
## Requested Solution Add an account-level (and optionally space-level) toggle: **"Disable public worker pools."** When enabled: - Stacks without a `worker_pool_id` cannot trigger runs; they fail immediately with a clear error message before reaching any worker - The Spacelift UI hides or disables the "use public workers" option when creating/editing stacks - API and Terraform provider calls that create or update a stack without a `worker_pool_id` are rejected This should be inheritable: setting it at a parent space cascades to all children, matching the existing space-based RBAC model. --- ## Use Case Our organization requires all Terraform execution to occur on internally-managed private worker pools for security and compliance. Today, this requires writing and maintaining an OPA/Rego PLAN policy, attaching it to the correct space, and accepting that the policy only fires after `terraform plan` has already executed on the public runner. A misconfigured or newly-created stack silently defaults to public runners with no preventive guardrail. A simple toggle would eliminate the need for policy-based workarounds entirely. --- ## Priority High. This is a blocker for enterprise customers with private infrastructure mandates.
Feature Requests
20 days ago
Access Control
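The inheritance behaviour requested above can be sketched as a walk up the space tree: the toggle is effective for a space if it is set on that space or any ancestor. The setting name, space names, and data shapes here are hypothetical.

```python
# Hypothetical cascade resolution for the "disable public worker pools"
# toggle: a setting on a parent space covers all of its children.
def public_workers_disabled(space, parents, settings) -> bool:
    current = space
    while current is not None:
        if settings.get(current, {}).get("disable_public_workers"):
            return True
        current = parents.get(current)  # None once we pass the root
    return False

parents = {"prod": "root", "prod-eu": "prod"}
settings = {"prod": {"disable_public_workers": True}}
assert public_workers_disabled("prod-eu", parents, settings)      # inherited
assert not public_workers_disabled("root", parents, settings)     # untouched
```

This mirrors how the request describes space-based RBAC cascading, and it is the check the API and Terraform provider would run before accepting a stack without a worker_pool_id.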
Native Git support
Add support for using a native Git binary to clone repositories in Spacelift. This would be really useful for teams that want more control over the checkout process, including options for partial cloning and mirror support.
Feature Requests
12 days ago
VCS
Allow creating a custom role that can manage Spacelift policies
We would like to be able to create a custom role that is able to create/update/delete policies. Currently the only way to grant stacks or users permissions to manage our Spacelift policies is through the default admin role.
Feature Requests
13 days ago
Access Control
Allow read-only AWS integrations to be autoattached
We have separate AWS IAM roles for Spacelift integration attachments, so we can allow preview runs to execute on unapproved code changes without worrying about someone being able to change the underlying AWS resources from their GitHub branch. This is working well, except that autoattach: no longer works to attach these integrations, because an attachment defaults to allowing writes. So we have to explicitly create every integration attachment for every stack. Would it be possible to allow autoattach_read: (and autoattach_write: for symmetry) labels on AWS integrations, to specify what the integration should be used for on the stacks that match?
Feature Requests
22 days ago
Integrations
Sync PR Title Updates to Spacelift Runs
When a pull request title is edited in GitHub after a run has been created, the updated title should be reflected in Spacelift.

Current Behavior
PR details (including the title) are captured at the time the run is created. Any subsequent edits to the PR title in GitHub are not propagated back to Spacelift; the run continues to display the original title.

Desired Behavior
Spacelift should listen for PR title update events from GitHub and update the corresponding run metadata accordingly, so the title shown in Spacelift always stays in sync with the current PR title.

Why This Matters
Teams often rename PRs to reflect scope changes, status (e.g. adding [WIP] or [READY]), or updated descriptions as work progresses. When Spacelift shows a stale title, it creates confusion when navigating runs and correlating them with open PRs.
Feature Requests
15 days ago
Allow Policies Attached to Modules to Run on Stacks that Use Them
We would like policies attached to modules, such as plan or approval policies, to optionally be run/evaluated on the stacks that use those modules. You can attach policies to stacks via labels, but our internal users can specify their own labels, and using autoattach:* feels clunky here. We are looking to gate specific modules behind security or DevOps approval without interfering much with the end-user flow.
Feature Requests
about 18 hours ago
Policies
Auto-deploy latest proposed plan
We'd love a halfway point between autodeploy being on or off, where tracked runs would autodeploy only if the latest proposed plan for that stack was identical to the tracked run's plan.

Our workflow for Terraform changes is fairly standard. People make changes, open an MR in GitLab, and then solicit review. Spacelift comments on the MR with a proposed plan for each stack affected by the change. At the point of approval, we consider both the code change and the proposed plans to have been "approved", and would be happy for the proposed plan to be applied automatically. However, this currently isn't possible. We only have the choice between no autodeploy (at which point there is another gate in the tracked run) or autodeploy on (at which point, if the environment has drifted for any reason, we may apply inadvertent changes that did not show in the proposed plan on the MR).

Ideally we'd be able to select an option that would:
- Have proposed plans work identically to how they currently work.
- When an MR is approved and merged, let the tracked run it triggers reference the previous proposed run.
- If the changes in that proposed run match the changes in the tracked run, autodeploy; otherwise hit the standard manual approval gate used when autodeploy is off.

In terms of implementation, it could work a few different ways:
- Have proposed runs save their plans as an artifact, similarly to how tracked runs do when planning. When the tracked run is created with this option turned on, rather than running a plan phase, go straight to apply with the latest proposed plan artifact. If this applies successfully, we're done; if it doesn't, fall back to running a "normal" tracked run, going into the plan phase and then sitting at the manual approval gate.
- Have proposed runs save some kind of hash of the plan that captures the changes it intends to make. At the tracked run stage, grab the hash of the last proposed run, then generate a new plan as you normally would in a tracked run, also taking its hash. Compare the two hashes and, if they match, automatically approve the run; if not, go to the manual approval gate.
- A variation of the above: just give the tracked run access to the last proposed run. This way users could replicate this behaviour with an approval policy, essentially implementing the plan comparison step above in the policy itself.
Feature Requests
about 20 hours ago
IaC Workflows
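The "plan hash" option above can be sketched by fingerprinting the resource changes from Terraform's machine-readable plan output (the resource_changes array produced by `terraform show -json`). This is a simplified illustration, it hashes only addresses and actions, whereas a real comparison would likely also cover the planned attribute values:

```python
import hashlib
import json

# Sketch: derive a stable fingerprint from a plan's resource changes so a
# tracked run can be auto-approved when it matches the proposed run.
# Simplification: real plans would also need planned values hashed in.
def plan_fingerprint(plan_json: dict) -> str:
    changes = sorted(
        (rc["address"], tuple(rc["change"]["actions"]))
        for rc in plan_json.get("resource_changes", [])
        if rc["change"]["actions"] != ["no-op"]  # no-ops don't affect outcome
    )
    return hashlib.sha256(json.dumps(changes).encode()).hexdigest()

proposed = {"resource_changes": [
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["create"]}},
]}
tracked = {"resource_changes": [
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["create"]}},
    {"address": "aws_iam_role.ci",    "change": {"actions": ["no-op"]}},
]}
assert plan_fingerprint(proposed) == plan_fingerprint(tracked)  # safe to auto-deploy
```

If drift occurred between the MR plan and the merge, the fingerprints diverge and the run falls through to the manual approval gate, which is exactly the safety property the request is after.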
Delta and changes counts are misleading on failed and discarded runs
Problem
The Changes view and the delta counts are generated at plan time; after apply, the delta counts are not updated to reflect the run outcome, and the Changes view has its own issues. When apply fails, the UI gives little to no indication of what actually happened. This is misleading for users reviewing completed runs.

Delta counts only reflect the plan
The + N ~ N - N delta shown in the Changes tab header within a run, and in the tracked runs list, is calculated from the plan and never updated after apply. In the run details view, this is somewhat understandable as a summary of what the plan proposed. But in the tracked runs list, the delta sits right next to the run's status badge. A row showing FAILED and + 9 ~ 1 - 0 gives no indication of the run's outcome. For unapplied or proposed runs the plan delta makes perfect sense, but for completed runs, especially discarded and failed ones, it becomes very deceptive. We understand these are simply plan deltas, but surfacing the actual apply outcome here would be a significant improvement and a very useful signal.

There's also an inconsistency: proposed runs have plans but don't show a delta in the MR/PR list. If the delta is meant to represent plan output rather than apply outcome, it should appear on proposed runs too. Showing it consistently across all runs would at least help users learn to read it as "what was planned" rather than "what happened."

Why this matters
For runs that fail or only partially apply, the UI gives no useful indication of what actually happened. The only way to find out is to read the raw apply logs. In some cases, the gap between what the UI shows and what actually happened can be dangerous. This is compounded by the fact that run logs expire. Since the Changes view cannot be relied on, once the Terraform output is gone the delta counts might be the only record of what a run did, making it very easy for users to read plan deltas as apply outcomes.

Suggested improvements
Fix the Changes view. Consider updating delta counts after apply, or showing both planned and applied/failed counts (e.g., "Planned: +9 ~1 | Failed: +2 ~1"); this is especially valuable given that these counts outlive the logs. The current plan-only view is fine for unconfirmed/discarded runs, but once apply runs, the badges should reflect reality, providing correct and durable information about the outcome. Currently, this isn't available anywhere else once logs expire. Show plan deltas consistently across all run states that have a plan (including proposed runs), so users can learn to read them as plan output rather than apply outcome.
Feature Requests
26 days ago
UI/UX
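The suggested "Planned: ... | Failed: ..." summary can be sketched from per-resource apply outcomes. The data shape (action plus outcome per resource) is hypothetical; it simply assumes each planned resource ends up applied, failed, or not attempted.

```python
from collections import Counter

# Hypothetical summary line combining planned deltas with apply failures,
# in the "+N ~N -N" notation the tracked runs list already uses.
def run_summary(resources):
    planned = Counter(r["action"] for r in resources)
    failed = Counter(r["action"] for r in resources if r["outcome"] == "failed")

    def fmt(counts):
        symbols = (("create", "+"), ("update", "~"), ("delete", "-"))
        return " ".join(f"{sym}{counts[a]}" for a, sym in symbols if counts[a])

    out = f"Planned: {fmt(planned)}"
    if failed:
        out += f" | Failed: {fmt(failed)}"
    return out

resources = [
    {"action": "create", "outcome": "failed"},
    {"action": "create", "outcome": "applied"},
    {"action": "update", "outcome": "failed"},
]
print(run_summary(resources))
# Planned: +2 ~1 | Failed: +1 ~1
```

A fully successful run would render with no Failed segment, so the existing plan-only display remains a clean special case of the richer one.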
Changes view shows "SUCCESSFUL" badges on failed resources
Problem
When a run's apply phase partially fails, the Changes view and the delta counts do not reflect what actually happened. This is confusing and misleading for users reviewing completed runs.

"SUCCESSFUL" badges on resources that failed to apply
This is a critical issue. The Changes view often shows a green SUCCESSFUL badge on every resource that was part of a confirmed plan, including resources that errored during apply. Notably, when a planned run is discarded, the badges stay at PLANNED. So the current behavior, as confirmed by Spacelift, is that "SUCCESSFUL" means "the plan was applied"; this is factually wrong for resources that failed during apply. In the context of a finished run, on a tab called "Changes," a green SUCCESSFUL badge next to a resource means one thing to every user: this resource was successfully changed. When any number of resources shown in the delta actually failed to apply, showing all of them as SUCCESSFUL is actively incorrect and utterly confusing.

Why this matters
For runs where apply partially succeeds, the gap between what the UI shows and what actually happened is dangerous: users may assume changes landed when they didn't, or miss that resources are in a broken state. The only way to find out what actually happened is to read the raw apply logs; the UI provides no help. This is compounded by the fact that run logs expire. Once the Terraform output is gone, the delta counts and Changes view become the only record of what a run did, and right now they can be wrong.

Suggested improvements
Update the Changes view after apply: show the per-resource apply outcome as failed, successful, or not attempted ("planned" can be equally confusing to less-experienced users, who may miss the fact that a run was discarded). The current plan-only view is fine for unconfirmed/discarded runs, but once apply runs, the badges should reflect reality, providing correct and durable information about the outcome. Currently, this isn't available anywhere else once logs expire. At minimum, stop labeling failed resources as SUCCESSFUL. If apply errored on a resource, that resource is not successful by any definition.
Feedback
26 days ago
UI/UX
Add links to the `Context` section of the stack page
The Contexts section of each stack is great; it shows which contexts are applied. What's not great is the process required to edit them. If I want to edit one of these contexts, the process is:
- jump to the Contexts tab of my stack
- get the name of the context
- jump to the list of contexts in Spacelift
- search for the name from the previous step
- finally get to edit the context
If we add links to each of the contexts on the stack page, it skips the search portion and lets me jump straight to editing.
Feature Requests
6 days ago
UI/UX
Cache the Spacelift Provider on Public Workers
Cache the Spacelift provider on public workers so that runs are no longer dependent on GitHub availability at execution time. This would reduce run failures caused by GitHub outages, and would also improve run times by removing a repeated network fetch.
Feature Requests
6 days ago
Workers