BYOM: Allow custom base URL for self-hosted / internal LLM endpoints
Problem

The current BYOM configuration accepts an API key for a supported provider (Anthropic, OpenAI, Gemini), but does not allow specifying a custom base URL. Enterprise organizations typically route LLM traffic through an internally-hosted proxy or gateway (e.g., LiteLLM) to enforce security controls, model governance, and cost management. In these environments, using a personal or team API key tied to a commercial provider account is not viable. All traffic must go through an internal endpoint on an approved FQDN.

Requested Solution

Add a custom base URL field to the Spacelift AI BYOM configuration, alongside the existing API key field. This would allow Spacelift to direct AI requests to any OpenAI-compatible endpoint (e.g., https://llm.internal.example.com/v1) rather than only to a commercial provider's public API. This pattern is standard across OpenAI-compatible clients (OPENAI_BASE_URL, openai.base_url in the Python SDK, etc.) and is how tools like LiteLLM, Azure OpenAI, vLLM, and Ollama are accessed.

Use Case

Our organization runs a centrally-managed LiteLLM instance serving internally-approved models via an OpenAI-compatible API. We want Spacelift AI features (plan summaries, resource explanations, policy suggestions) to use this internal endpoint, but the current BYOM flow only accepts a provider API key, with no way to redirect the base URL.

Priority

High. This is a blocker for enterprise customers with internal model governance requirements.
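For illustration, here is a minimal sketch of the base URL override pattern referenced above, using the OpenAI Python SDK pointed at an OpenAI-compatible gateway. The endpoint URL, token, and model name are placeholders, not existing Spacelift settings.

    # Minimal sketch: a standard OpenAI-compatible client pointed at an internal gateway.
    # The URL, token, and model name below are illustrative placeholders only.
    # Equivalently, the SDK honors the OPENAI_BASE_URL environment variable.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://llm.internal.example.com/v1",  # internal LiteLLM/vLLM-style endpoint
        api_key="internal-gateway-token",                # issued by the gateway, not a commercial provider
    )

    resp = client.chat.completions.create(
        model="approved-internal-model",                 # whatever model the gateway routes to
        messages=[{"role": "user", "content": "Summarize this Terraform plan."}],
    )
    print(resp.choices[0].message.content)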
💡 Feature Requests
3 days ago
Integrations
Only space_admin can create stack dependencies.
You are unable to create stack dependencies without the space_admin role.
💡 Feature Requests
17 days ago
Access Control
Delta and changes counts are misleading on failed and discarded runs
Problem

The Changes view and the delta counts are generated at plan time. After apply, the delta counts are not updated to reflect the run outcome, and the Changes view has its own issues. When apply fails, the UI gives little to no indication of what actually happened. This is misleading for users reviewing completed runs.

Delta counts only reflect the plan

The + N ~ N - N delta shown in the Changes tab header within a run, and in the tracked runs list, is calculated from the plan and never updated after apply. In the run details view, this is somewhat understandable as a summary of what the plan proposed. But in the tracked runs list, the delta sits right next to the run's status badge. A row showing FAILED and + 9 ~ 1 - 0 gives no indication of the run's outcome. For unapplied or proposed runs the plan delta makes perfect sense, but for completed runs, especially discarded and failed ones, it becomes very deceptive. We understand these are simply plan deltas, but surfacing the actual apply outcome here would be a significant improvement and a very useful signal.

There's also an inconsistency: proposed runs have plans but don't show a delta in the MR/PR list. If the delta is meant to represent plan output rather than apply outcome, it should appear on proposed runs too. Showing it consistently across all runs would at least help users learn to read it as "what was planned" rather than "what happened."

Why this matters

For runs that fail or only partially apply, the UI gives no useful indication of what actually happened. The only way to find out is to read the raw apply logs. In some cases, the gap between what the UI shows and what actually happened can be dangerous. This is compounded by the fact that run logs expire. Since the Changes view cannot be relied on, once the Terraform output is gone the delta counts might be the only record of what a run did, making it very easy for users to read plan deltas as apply outcomes.

Suggested improvements

Fix the Changes view. Consider updating delta counts after apply, or showing both planned and applied/failed counts (e.g., "Planned: +9 ~1 | Failed: +2 ~1"). This is especially valuable given that these counts outlive the logs. The current plan-only view is fine for unconfirmed/discarded runs, but once apply runs, the badges should reflect reality, providing correct and durable information about the outcome; currently this isn't available anywhere else once logs expire.

Show plan deltas consistently across all run states that have a plan (including proposed runs), so users can learn to read them as plan output rather than apply outcome.
💡 Feature Requests
1 day ago
UI/UX
Changes view shows "SUCCESSFUL" badges on failed resources
Problem

When a run's apply phase partially fails, the Changes view and the delta counts do not reflect what actually happened. This is confusing and misleading for users reviewing completed runs.

"SUCCESSFUL" badges on resources that failed to apply

This is a critical issue. The Changes view often shows a green SUCCESSFUL badge on every resource that was part of a confirmed plan, including resources that errored during apply. Notably, when a planned run is discarded, the badges stay at PLANNED. So the current behavior, as confirmed by Spacelift, is that "SUCCESSFUL" means "the plan was applied". This is factually wrong for resources that failed during apply. In the context of a finished run, on a tab called "Changes", a green SUCCESSFUL badge next to a resource means one thing to every user: this resource was successfully changed. When any number of resources shown in the delta actually failed to apply, showing all of them as SUCCESSFUL is actively incorrect and utterly confusing.

Why this matters

For runs where apply partially succeeds, the gap between what the UI shows and what actually happened is dangerous: users may assume changes landed when they didn't, or miss that resources are in a broken state. The only way to find out what actually happened is to read the raw apply logs; the UI provides no help. This is compounded by the fact that run logs expire. Once the Terraform output is gone, the delta counts and Changes view become the only record of what a run did, and right now they can be wrong.

Suggested improvements

Update the Changes view after apply: show the per-resource apply outcome as failed, successful, or not attempted ("planned" can be equally confusing to less-experienced users, who may miss the fact that a run was discarded). The current plan-only view is fine for unconfirmed/discarded runs, but once apply runs, the badges should reflect reality, providing correct and durable information about the outcome. Currently this isn't available anywhere else once logs expire.

At minimum, stop labeling failed resources as SUCCESSFUL. If apply errored on a resource, that resource is not successful by any definition.
📝 Feedback
1 day ago
UI/UX
Changes view shows "SUCCESSFUL" badges on failed resources
Problem When a run's apply phase partially fails, the Changes view and the delta counts do not reflect what actually happened. This is confusing and misleading for users reviewing completed runs. "SUCCESSFUL" badges on resources that failed to apply This is a critical issue. The Changes view often shows a green SUCCESSFUL badge on every resource that was part of a confirmed plan, including resources that errored during apply. Notably, when a planned run is discarded, the badges stay at PLANNED. So the current behavior, as confirmed by Spacelift, is that "SUCCESSFUL" means "the plan was applied", this is factually wrong for resources that failed during apply. In the context of a finished run, on a tab called "Changes," a green SUCCESSFUL badge next to a resource means one thing to every user: this resource was successfully changed. When any number of resources shown in the delta actually failed to apply, showing all of them as SUCCESSFUL is actively incorrect and utterly confusing. Why this matters For runs where apply partially succeeds, the gap between what the UI shows and what actually happened is dangerous, users may assume changes landed when they didn't, or miss that resources are in a broken state. The only way to find out what actually happened is to read the raw apply logs; the UI provides no help. This is compounded by the fact that run logs expire. Once the Terraform output is gone, the delta counts and Changes view become the only record of what a run did, and right now they can be wrong. Suggested improvements Update the Changes view after apply: show per-resource apply outcome: failed, successful, or not attempted (“planned” can be equally confusing to less-experienced users who may miss the fact that a run was discarded). The current plan-only view is fine for unconfirmed/discarded runs, but once apply runs, the badges should reflect reality, providing correct and durable information about the outcome. Currently, this isn’t available anywhere else once logs expire. At minimum, stop labeling failed resources as SUCCESSFUL. If apply errored on a resource, that resource is not successful by any definition.
📝 Feedback
1 day ago
UI/UX
⬆️ Gathering votes
Add a visual badge in the stack UI for forced-apply runs.
When creating a run, we now have the option to force-apply that single stack or to do so in cascade. The issue surfaces while the run is queued or ready, because we cannot tell that the option was enabled; it only becomes visible after the plan phase, when the apply phase is shown as forced. It would be very useful to see this somewhere in the stack's list of runs and also in each individual run. Alongside this, the flag should be exposed in the GraphQL API when listing runs for the worker queue, with a matching filter in the worker queue UI.
💡 Feature Requests
3 days ago
UI/UX
Sandbox Environment for Policy Testing & Other Functionality
As an example, when iterating on login policies, every change invalidates all active sessions for the affected account. This creates a feedback loop: authors must repeatedly log back in (and disrupt other users' sessions) each time they test a policy modification, even for minor fixes like correcting a role name.

The existing policy simulator helps validate syntax and basic logic, but it does not catch real-world authorization issues. For example, a policy may reference a role name like writer instead of the correct space-writer. The simulator evaluates the policy as valid, but the role doesn't actually grant the intended permissions in practice. These kinds of bugs are only discoverable through live testing — which currently means deploying to production and invalidating sessions.
💡 Feature Requests
8 days ago
Resources
⚙️ In Progress
JWT claims support for OIDC API keys (teams/groups passthrough)
When using OIDC API keys for authentication, JWT teams/groups claims are completely ignored; only the sub claim is processed. Teams must be pre-configured statically when creating the OIDC API key, making it impossible to pass through user group/team information dynamically at runtime.

We are building a custom Backstage integration with Spacelift to enable self-service infrastructure provisioning with per-user permission boundaries. The Backstage plugin is not suitable as it uses a single admin API key. We need OIDC API keys to pass through the current user and respect Spacelift login policies, as is currently possible with SAML via input.session.teams. With thousands of team/service combinations, the existing workarounds (static API keys per team or subject-based encoding) are not viable at scale.
💡 Feature Requests
9 days ago
OIDC
🔭 Discovery
Default worker pool at space or organization level
It would be helpful to set a private worker pool as the default for all stacks organization-wide.
💡 Feature Requests
about 1 month ago
Workers
🔭 Discovery
Allow merging multiple stack notifications into common GitHub PR comment
We sometimes get PRs which affect many different Stacks. Because each Stack posts an individual comment with the proposed run status, the PR conversation section can become bloated and basically unusable. We have also hit GitHub API rate limits, and the comments may have contributed to this. We would like an option to merge proposed runs into a single PR comment that gathers all proposed runs for a given PR. We keep comment content quite short, so the GitHub comment character limit should not be an issue.
💡 Feature Requests
about 2 months ago
Notifications
⚙️ In Progress
MCP tool for searching for runs
It looks like the current tools in the MCP server only allow either getting a specific run using a stack ID and run ID, or listing the runs in a stack. For some use cases it would be more efficient to have a tool that allows searching or filtering runs, in particular so that an agent can find the proposed runs for a pull request based on the commit ID.
💡 Feature Requests
16 days ago
Integrations
Module Registry: Display .tf source files in the Examples tab
When a Terraform module includes an examples/ folder, the Spacelift Module Registry only shows a README (if one exists) under the Examples tab. There is no display of the actual .tf files (e.g. main.tf) that contain the example configuration. This mirrors a known limitation in the public Terraform Registry.

The Examples tab in the Spacelift Module Registry renders only the README for a given example subfolder. If no README exists, nothing is displayed at all. The actual .tf files are not surfaced anywhere in the UI.

The .tf files within an examples/ subfolder should be displayed directly in the UI, either alongside or as a fallback to the README, so users can understand how to use the module without navigating to the source code repository. Ideally, the .tf files from the examples/ directory would be rendered in a syntax-highlighted code view within the Examples tab, either as additional tabs per file or as a collapsible file tree below the README.
💡 Feature Requests
3 days ago
UI/UX
Queue visibility in the UI
The current Pending runs count is misleading because it lumps together high-priority and low-priority (e.g. drift detection) runs without any distinction. Users can see a large pending number next to a small worker count and panic.

Suggestion: surface run priority/ordering in the queue UI. Show the expected wait time, or at least a priority-aware count, something like "Workers: 6 | Busy: 6 | Pending: 7 (+ 183 low-priority tasks)" as a quick win, with a long-term goal of an AI-estimated wait time in minutes.
💡 Feature Requests
5 days ago
View 'Used By' tab for system roles like for Custom Roles
You can click on Custom Roles and view two tabs (Permissions and Used By). For the system roles (default reader, writer, admin), you cannot click into them at all. It makes sense that their permissions cannot be configured, but I want to be able to view the "Used By" tab for them as well.
💡 Feature Requests
5 days ago
Access Control
Allow users to import Terraform state into Spacelift stacks via the CLI, in addition to the existing UI flow.
State import is only available through the Spacelift UI. Users must manually click through each stack one by one to import state files. For customers migrating large numbers of stacks, this is a significant manual bottleneck.

Desired behavior

A CLI command (e.g. spacectl stack state import) that accepts a state file path and a stack identifier, allowing state import to be scripted and automated as part of a migration pipeline.
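As an illustration of the desired automation, here is a minimal Python sketch built around the requested (currently non-existent) spacectl subcommand. The subcommand name, flags, stack IDs, and file paths are all hypothetical.

    # Hypothetical migration loop around the *requested* CLI command.
    # "spacectl stack state import" and its flags do not exist today; they are
    # taken from this request, and the stack IDs / paths are made up.
    import subprocess

    stacks = {
        "networking-prod": "states/networking-prod.tfstate",
        "app-prod": "states/app-prod.tfstate",
    }

    for stack_id, state_file in stacks.items():
        subprocess.run(
            ["spacectl", "stack", "state", "import",
             "--id", stack_id, "--state-file", state_file],  # hypothetical flags
            check=True,
        )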
💡 Feature Requests
8 days ago
🔭 Discovery
Support dynamic VCS repository input in Templates
Templates currently resolve the VCS repository field at publish time, meaning the repository is baked into the template version and can't be provided as a dynamic input. This prevents using Templates for use cases where the same stack configuration needs to be deployed across multiple repositories, e.g. a "Stack Vendor" template that engineers deploy from to onboard their own repo.
💡 Feature Requests
about 1 month ago
Stacks
Source code RBAC permission
Please implement RBAC for source code integrations. We do not allow our user base to assume 'space admin' privileges. We make the most of the new custom roles feature to allow customer teams to create their own stacks. However, we have to manage their source code integrations on their behalf, as there are no available RBAC actions that can be assigned to custom roles to create GitHub / GitLab / Bitbucket integrations. Could you please implement this? It would vastly simplify the administrative overhead. Many thanks, and loving your work.
💡 Feature Requests
10 days ago
Access Control
Add template data sources to Terraform provider
Add data sources for working with templates in the Terraform provider, such as spacelift_templates to list templates and spacelift_template_version_by_name to resolve a version by name (for example 1.0.0).
💡 Feature Requests
12 days ago
IaC Workflows
🔭 Discovery
Auto-release stale Terraform state locks on run failure
When a run fails unexpectedly, stale Terraform state locks are sometimes left behind. Request: detect this condition and automatically clean up locks that were created as part of a run.
💡 Feature Requests
26 days ago
Stacks
Sharing private worker pools across spaces without inheritance
We would like the ability to share private worker pools across spaces without requiring space inheritance to be enabled. Ideally, this would support sharing between sibling spaces, in a similar way to how modules can currently be shared.
💡 Feature Requests
12 days ago
Workers