Only space_admin can create stack dependencies.
You are unable to create stack dependencies without the space_admin role.
💡 Feature Requests
about 1 month ago
Access Control
BYOM: Allow custom base URL for self-hosted / internal LLM endpoints
Problem

The current BYOM configuration accepts an API key for a supported provider (Anthropic, OpenAI, Gemini), but does not allow specifying a custom base URL. Enterprise organizations typically route LLM traffic through an internally-hosted proxy or gateway (e.g., LiteLLM) to enforce security controls, model governance, and cost management. In these environments, using a personal or team API key tied to a commercial provider account is not viable. All traffic must go through an internal endpoint on an approved FQDN.

Requested Solution

Add a custom base URL field to the Spacelift AI BYOM configuration, alongside the existing API key field. This would allow Spacelift to direct AI requests to any OpenAI-compatible endpoint (e.g., https://llm.internal.example.com/v1) rather than only to a commercial provider's public API. This pattern is standard across OpenAI-compatible clients (OPENAI_BASE_URL, openai.base_url in the Python SDK, etc.) and is how tools like LiteLLM, Azure OpenAI, vLLM, and Ollama are accessed.

Use Case

Our organization runs a centrally-managed LiteLLM instance serving internally-approved models via an OpenAI-compatible API. We want Spacelift AI features (plan summaries, resource explanations, policy suggestions) to use this internal endpoint, but the current BYOM flow only accepts a provider API key, with no way to redirect the base URL.

Priority

High. This is a blocker for enterprise customers with internal model governance requirements.
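Purely for reference (not part of the request), this is the base-URL override pattern the text refers to, shown with the official openai Python SDK; the gateway URL, token, and model alias are placeholders:

```python
from openai import OpenAI

# Point the client at an internal OpenAI-compatible gateway (e.g. LiteLLM)
# instead of the provider's public API. URL, token, and model alias below
# are placeholders for illustration.
client = OpenAI(
    base_url="https://llm.internal.example.com/v1",
    api_key="internal-gateway-token",
)

response = client.chat.completions.create(
    model="internal/approved-model",  # whatever alias the internal gateway exposes
    messages=[{"role": "user", "content": "Summarize this Terraform plan ..."}],
)
print(response.choices[0].message.content)
```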
💡 Feature Requests
17 days ago
Integrations
Disable public worker pools at the account/space level
## Requested Solution

Add an account-level (and optionally space-level) toggle: **"Disable public worker pools."** When enabled:

- Stacks without a `worker_pool_id` cannot trigger runs; they fail immediately with a clear error message before reaching any worker
- The Spacelift UI hides or disables the "use public workers" option when creating/editing stacks
- API and Terraform provider calls that create or update a stack without a `worker_pool_id` are rejected

This should be inheritable: setting it at a parent space cascades to all children, matching the existing space-based RBAC model.

---

## Use Case

Our organization requires all Terraform execution to occur on internally-managed private worker pools for security and compliance. Today, this requires writing and maintaining an OPA/Rego PLAN policy, attaching it to the correct space, and accepting that the policy only fires after `terraform plan` has already executed on the public runner. A misconfigured or newly-created stack silently defaults to public runners with no preventive guardrail. A simple toggle would eliminate the need for policy-based workarounds entirely.

---

## Priority

High. This is a blocker for enterprise customers with private infrastructure mandates.
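For context, the policy-based workaround described above looks roughly like the sketch below. It is illustrative only: the exact plan-policy input field exposing the stack's worker pool is an assumption and should be checked against the current policy input schema.

```rego
package spacelift

# Deny runs on stacks that are not pinned to a private worker pool.
# NOTE: `input.spacelift.stack.worker_pool` is an assumed field name, not a
# confirmed part of the plan-policy input schema.
deny[msg] {
  stack := input.spacelift.stack
  not stack.worker_pool.id
  msg := sprintf("stack %s must run on a private worker pool", [stack.id])
}
```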
💡 Feature Requests
9 days ago
Access Control
Native Git support
Add support for using a native Git binary to clone repositories in Spacelift. This would be really useful for teams that want more control over the checkout process, with options for partial cloning and mirror support as well.
💡 Feature Requests
about 21 hours ago
Allow read-only AWS integrations to be autoattached
We have separate AWS IAM roles for Spacelift integration attachments, so we can allow preview runs to run on unapproved code changes, without worrying about someone being able to make a change to the underlying AWS resources from their GitHub branch. This is working well, except autoattach: no longer works to attach these integrations, because it defaults to allowing writes. So we have to explicitly create every integration attachment for every stack. Is it possible to allow autoattach_read: (and autoattach_write: for symmetry) labels on AWS integrations to specify what the integration should be used for on the stacks that match?
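A sketch of what the proposed labels could look like on AWS integrations managed through the Spacelift Terraform provider. The autoattach_read:/autoattach_write: label syntax is the proposal itself (not an existing feature), and the role ARNs and label targets are placeholders:

```hcl
# Hypothetical: the autoattach_read:/autoattach_write: labels below are the
# proposed syntax, not something the provider supports today.
resource "spacelift_aws_integration" "readonly" {
  name     = "aws-readonly"
  role_arn = "arn:aws:iam::123456789012:role/spacelift-readonly"
  labels   = ["autoattach_read:preview-stacks"]
}

resource "spacelift_aws_integration" "readwrite" {
  name     = "aws-readwrite"
  role_arn = "arn:aws:iam::123456789012:role/spacelift-readwrite"
  labels   = ["autoattach_write:preview-stacks"]
}
```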
💡 Feature Requests
11 days ago
Integrations
Sync PR Title Updates to Spacelift Runs
When a pull request title is edited in GitHub after a run has been created, the updated title should be reflected in Spacelift.

Current Behavior

PR details (including the title) are captured at the time the run is created. Any subsequent edits to the PR title in GitHub are not propagated back to Spacelift - the run continues to display the original title.

Desired Behavior

Spacelift should listen for PR title update events from GitHub and update the corresponding run metadata accordingly, so the title shown in Spacelift always stays in sync with the current PR title.

Why This Matters

Teams often rename PRs to reflect scope changes, status (e.g. adding [WIP] or [READY]), or updated descriptions as work progresses. When Spacelift shows a stale title, it creates confusion when navigating runs and correlating them with open PRs.
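For reference, GitHub already emits the needed signal: a pull_request webhook with action "edited" carries the previous title under changes. An abbreviated example payload (values are illustrative):

```json
{
  "action": "edited",
  "changes": {
    "title": { "from": "WIP: add staging VPC" }
  },
  "pull_request": {
    "number": 123,
    "title": "Add staging VPC"
  }
}
```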
💡 Feature Requests
4 days ago
Delta and changes counts are misleading on failed and discarded runs
Problem

The Changes view and the delta counts are generated at plan time; after apply, the delta counts are not updated to reflect the run outcome, and the Changes view has its own issues. When apply fails, the UI gives little to no indication of what actually happened. This is misleading for users reviewing completed runs.

Delta counts only reflect the plan

The + N ~ N - N delta shown in the Changes tab header within a run, and in the tracked runs list, is calculated from the plan and never updated after apply. In the run details view, this is somewhat understandable as a summary of what the plan proposed. But in the tracked runs list, the delta sits right next to the run's status badge. A row showing FAILED and + 9 ~ 1 - 0 gives no indication of the run's outcome. For unapplied or proposed runs the plan delta makes perfect sense, but for completed runs, especially discarded and failed ones, it becomes very deceptive. We understand these are simply plan deltas, but surfacing the actual apply outcome here would be a significant improvement and a very useful signal.

There's also an inconsistency: proposed runs have plans but don't show a delta in the MR/PR list. If the delta is meant to represent plan output rather than apply outcome, it should appear on proposed runs too. Showing it consistently across all runs would at least help users learn to read it as "what was planned" rather than "what happened."

Why this matters

For runs that fail or only partially apply, the UI gives no useful indication of what actually happened. The only way to find this out is to read the raw apply logs. In some cases, the gap between what the UI shows and what actually happened can be dangerous. This is compounded by the fact that run logs expire. Since the Changes view cannot be relied on, once the Terraform output is gone, the delta counts might be the only record of what a run did, making it very easy for users to read plan deltas as apply outcomes.

Suggested improvements

Fix the Changes view. Consider updating delta counts after apply, or showing both planned and applied/failed counts (e.g., "Planned: +9 ~1 | Failed: +2 ~1"), especially valuable given that these counts outlive the logs. The current plan-only view is fine for unconfirmed/discarded runs, but once apply runs, the badges should reflect reality, providing correct and durable information about the outcome. Currently, this isn't available anywhere else once logs expire.

Show plan deltas consistently across all run states that have a plan (including proposed runs), so users can learn to read them as plan output rather than apply outcome.
💡 Feature Requests
15 days ago
UI/UX
Changes view shows "SUCCESSFUL" badges on failed resources
Problem

When a run's apply phase partially fails, the Changes view and the delta counts do not reflect what actually happened. This is confusing and misleading for users reviewing completed runs.

"SUCCESSFUL" badges on resources that failed to apply

This is a critical issue. The Changes view often shows a green SUCCESSFUL badge on every resource that was part of a confirmed plan, including resources that errored during apply. Notably, when a planned run is discarded, the badges stay at PLANNED. So the current behavior, as confirmed by Spacelift, is that "SUCCESSFUL" means "the plan was applied"; this is factually wrong for resources that failed during apply. In the context of a finished run, on a tab called "Changes," a green SUCCESSFUL badge next to a resource means one thing to every user: this resource was successfully changed. When any number of resources shown in the delta actually failed to apply, showing all of them as SUCCESSFUL is actively incorrect and utterly confusing.

Why this matters

For runs where apply partially succeeds, the gap between what the UI shows and what actually happened is dangerous: users may assume changes landed when they didn't, or miss that resources are in a broken state. The only way to find out what actually happened is to read the raw apply logs; the UI provides no help. This is compounded by the fact that run logs expire. Once the Terraform output is gone, the delta counts and Changes view become the only record of what a run did, and right now they can be wrong.

Suggested improvements

Update the Changes view after apply: show per-resource apply outcome: failed, successful, or not attempted ("planned" can be equally confusing to less-experienced users who may miss the fact that a run was discarded). The current plan-only view is fine for unconfirmed/discarded runs, but once apply runs, the badges should reflect reality, providing correct and durable information about the outcome. Currently, this isn't available anywhere else once logs expire.

At minimum, stop labeling failed resources as SUCCESSFUL. If apply errored on a resource, that resource is not successful by any definition.
Feedback
15 days ago
UI/UX
TTL on Intent Projects with automatic resource cleanup
Expose a ttl (or expires_at) property on Intent Projects with the following behavior:

Configuration – TTL can be set at project creation or updated later, expressed either as a duration (72h, 7d, 30d) or an absolute timestamp. It should be settable via UI, API, GraphQL, and the Terraform provider.

Default & policy control – Allow administrators to set a default TTL per space, and/or enforce a maximum TTL via an Intent policy (e.g., "all Intent Projects in the dev space must have a TTL ≤ 14 days").

Notifications – Send notifications (email/Slack via existing notification policies) to the project owner at configurable thresholds before expiration (e.g., 72h, 24h, 1h) with a one-click "extend TTL" action.

Expiration behavior – When the TTL elapses, Spacelift automatically performs a destroy of all resources managed by the project in dependency order, subject to existing Intent policies (so a policy denial on delete is still respected and surfaced). The full cleanup run is recorded on the project's History tab for auditability, matching how deletions are already tracked.

Project lifecycle after cleanup – Configurable: either archive the project (keeping history/receipts for audit) or fully delete it. Archive should be the default.

Lock & grace handling – If a project is actively locked by a user session when TTL expires, delay cleanup until the lock releases (or a configurable grace period elapses), and notify the lock holder.
💡 Feature Requests
10 days ago
Auto-attachment of policies to Intent Projects via labels
Allow Intent policies (and ideally also contexts and integrations on Intent Projects) to use the existing autoattach: label convention. When an Intent Project is created or updated with a matching label, any policy carrying autoattach: should be automatically attached. Concretely:

- An Intent policy labeled autoattach:intent-baseline should auto-attach to every Intent Project labeled intent-baseline.
- The wildcard form autoattach:* should attach a policy to all Intent Projects within the policy's space (and child spaces), matching the behavior described in the Context docs.
- Auto-attached policies should be visible in the Intent Project's Policies tab, clearly marked as auto-attached (same UX as stacks).
- Support should extend to the Terraform provider and GraphQL API so that Intent Projects can be governed declaratively from an administrative stack.
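For illustration, this is roughly how the existing autoattach convention is declared on a policy through the Spacelift Terraform provider today (the policy name, type, and file path are placeholders); the request is simply for Intent Projects to honor the same label:

```hcl
resource "spacelift_policy" "intent_baseline" {
  name = "intent-baseline-guardrails"
  type = "APPROVAL" # placeholder policy type
  body = file("${path.module}/policies/intent-baseline.rego")

  # Existing convention: auto-attach to anything carrying the matching label.
  # The request is for Intent Projects labeled "intent-baseline" to pick this
  # up the same way stacks do today.
  labels = ["autoattach:intent-baseline"]
}
```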
💡 Feature Requests
10 days ago
Concurrency Limits Per Context / Provider
Currently, Spacelift does not support limiting the number of concurrent runs that share a specific context or hit a specific external provider/API. The only way to throttle this today is by isolating stacks into dedicated worker pools of size 1.
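For context, the workaround mentioned above looks roughly like this sketch via the Terraform provider: a dedicated pool with a single worker registered to it out of band, and each affected stack pinned to that pool. All names and repository details are placeholders:

```hcl
# Dedicated pool; registering exactly one worker to it (done outside Terraform)
# is what actually enforces the concurrency limit of 1.
resource "spacelift_worker_pool" "rate_limited_api" {
  name        = "rate-limited-api"
  description = "Single-worker pool used to serialize runs that hit a fragile API"
}

# Every stack that talks to that API gets pinned to the pool.
resource "spacelift_stack" "example" {
  name           = "uses-fragile-api"
  repository     = "infrastructure"
  branch         = "main"
  worker_pool_id = spacelift_worker_pool.rate_limited_api.id
}
```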
💡 Feature Requests
12 days ago
Support private Docker runner images on the public worker pool
Allow stacks using the public (shared) worker pool to pull runner images from private container registries (e.g. ECR, Docker Hub private, GCR), rather than requiring images to be publicly accessible. Currently, customers on the public worker pool must publish their runner images to a public registry. Given how Docker images currently work, this is intentional: the Docker daemon is shared across tenants, so cached image layers could be accessible to other customers. Private worker pools don't have this constraint since the user owns the infrastructure (https://docs.spacelift.io/integrations/docker.html#using-private-docker-images). But in the user's situation they would rather accept the risk of their image being leaked via the Docker daemon to other Spacelift customers than have to publish the image to a public registry.
💡 Feature Requests
about 22 hours ago
⬆️ Gathering votes
Add a visual badge in the stack UI for forced-apply.
When creating a run, we now have the option to force apply that single stack or in cascade. The issue surfaces when the run is queued or ready, because we cannot know whether that option was enabled. It only becomes visible after the plan phase, when the apply phase is shown with the force apply. I think it would be very useful and interesting to be able to see that somewhere in the stack's list of runs and also in each individual run. Together with this, I think it should be added to the GraphQL API when listing the runs for the workers queue, with a corresponding filter in the workers queue UI.
💡 Feature Requests
16 days ago
UI/UX
Allow creating a custom role that can manage Spacelift policies
We would like to be able to create a custom role that is able to create/update/delete policies. Currently the only way to grant stacks or users permissions to manage our Spacelift policies is through the default admin role.
💡 Feature Requests
2 days ago
Access Control
Add read/write integration affinity to auto-attach labels
It is not currently possible to associate auto-attached integrations specifically with read or write roles for a stack. If I auto-attach a read integration and a write integration, Spacelift gets confused and tries to write with the read integration sometimes.
💡 Feature Requests
3 days ago
Expose authenticated user's team membership in Blueprint/Template context
Current state

Available in policies – session.teams. Login and approval policies already receive the signed-in user's team membership. Example from one of our approval-policy inputs:

    "session": {
      "login": "jane.doe@example.com",
      "name": "jane.doe@example.com",
      "admin": true,
      "teams": ["developers", "admins"]
    }

The same shape appears as run.creator_session.teams for the user who triggered a run (e.g. ["developers"]). So the OIDC → Spacelift team mapping is already piped into policy evaluation – this request is to pipe the same value into Blueprint/Template rendering.

The artifact we want produced automatically. Today our stacks end up with labels like:

    "labels": ["Owner: john.doe@example.com", "Team: developers", "Project: ...", ...]

The Team: developers portion is the one filled from the manual dropdown. That's the value we want auto-populated from the user's teams.

Not available in Templates and Blueprints. Per https://docs.spacelift.io/concepts/template/configuration#available-context-properties, the only user fields exposed are context.user.login, context.user.name, and context.user.account. There is no team/group field.

Proposed API

Add context.user.teams – an array of strings – mirroring the session.teams shape that login/approval policies already receive. Same source of truth, same naming, no new mental model for users or operators.

Example Blueprint snippet (replacing the manual dropdown):

    inputs:
      # (no more `team` input – derived automatically)
      - id: project
        name: Project name
        type: short_text
    stack:
      name: ${ inputs.project }-${ context.random_string }
      labels:
        - "Owner: ${ context.user.login }"
        - "Team: ${ context.user.teams[0] }"   # primary team
        - "Project: ${ inputs.project }"
      terraform:
        variables:
          - name: tags
            value: |
              {
                "Owner" = "${ context.user.login }"
                "Team"  = "${ context.user.teams[0] }"
              }

Open design question for Spacelift: how to handle users with multiple mapped teams. Reasonable options: expose the full array and let Blueprint authors choose (`teams[0]`, join(",", teams), etc.); offer a convenience context.user.primary_team based on group ordering; or let the Blueprint author declare a preferred team via a filter expression. We'd be happy with the first (full array) – it composes well with existing template functions and defers the policy decision to the Blueprint author.

Use cases

- Auto-tagging cloud resources for cost allocation / showback (primary).
- Auto-populating stack labels (`Team: …`, Owner: …) directly from identity, instead of free-text input.
- Conditional Blueprint logic – e.g., only members of a specific team (say, platform) can select certain backends or regions.
- Audit & compliance – the team that created a stack is provable from the OIDC identity rather than from a value the user typed in.

Why this matters

- Cost data becomes accurate by default instead of being guarded by a manual step that often goes wrong.
- Eliminates a class of human error (mis-selected team, blank input).
- Removes the maintenance chore of keeping Blueprint dropdowns in sync with OIDC group changes.
- Reuses the trust boundary Spacelift already enforces (the OIDC → team mapping that drives login/approval policies) instead of asking us to re-establish it inside every Blueprint as a free-text input.
💡 Feature Requests
3 days ago
Blueprints
Add a --no-open option to `spacectl profile login`
In environments like WSL2, spacectl can try to open a browser for authentication, when that is less than desirable. Adding some sort of no-open option that just prints the URL to click on would be great, and would avoid starting undesired browsers in the VM. I imagine there are other situations too where trying to start a browser would be undesirable, but I don't see a current option in the help text to stop it from happening.
💡 Feature Requests
3 days ago
Export Additional Metrics to spacelift-promex
We are heavy users of Grafana and Prometheus for observability and alerting. The set of metrics that Spacelift exposes to Prometheus today is very limited, and we want to build out a rich set of dashboards that give insight into platform performance and also the developer experience. We noticed that only ~5 of 36 GraphQL metrics are being exposed in spacelift-promex.

Asks:

- Ensure that spacelift-promex exports all metrics available in the metrics API
- Be able to adjust histogram buckets for all relevant metrics
💡 Feature Requests
5 days ago
Observability
Admin page that shows ALL recent runs
Spacelift Self-Hosted needs an Admin page that shows ALL recent completed/pending/in-progress runs. This would be helpful when finding recent activity and debugging performance / utilization issues. The current "Runs" interface only allows you to see one type of activity.
💡 Feature Requests
5 days ago
Observability