Installation — Deploy KubeStellar Console

This guide covers all deployment options for KubeStellar Console, the multi-cluster Kubernetes dashboard with AI-powered operations.

Try it first! See a live preview at kubestellarconsole.netlify.app


Prerequisites and resource requirements

Before installing into a Kubernetes cluster, make sure your target meets these requirements. For local-only evaluation (curl one-liner or run from source) you only need the entries marked Local.

| Requirement | Minimum | Notes |
|---|---|---|
| Kubernetes version | 1.28+ | Matches the Pod Security restricted profile the chart targets. Tested on 1.28–1.31. |
| Default StorageClass | One must exist | Needed when persistence.enabled=true (the default). Disable persistence on clusters without one. See Troubleshooting → PVC stuck Pending. |
| Node CPU (request) | 250m | Burstable — no hard limit set by the chart. |
| Node memory (request) | 256Mi | Startup probe takes ~30 s on cold start. |
| Node memory (recommended) | 512Mi+ | Real clusters with many contexts. |
| Ephemeral / PVC storage | 1Gi | SQLite database + backup snapshots. |
| Service port | 8080 | The service listens on 8080, not 80. Port-forward with 8080:8080. |
| Namespace PodSecurity | baseline or restricted OK | The chart is compliant with restricted out of the box. |
| GitHub OAuth App | Optional | Only required for multi-user logins; omit for demo or single-user local. |
| Local: Go | 1.24+ | Only for “run from source”. |
| Local: Node.js | 20+ | Only for “run from source”. |
| Local: kubectl | latest | |
| Local: kubeconfig | ≥ 1 context | kubectl config get-contexts must list at least one context. |
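
The in-cluster requirements above can be pre-checked with a short script. This is an illustrative sketch, not part of the product — it assumes kubectl is on PATH, and the grep patterns may need adjusting for your distribution's output format:

```shell
# Pre-flight sketch: contexts, default StorageClass, server version.
set -u

# At least one kubeconfig context must exist (Local: kubeconfig row)
ctx_count=$(kubectl config get-contexts -o name 2>/dev/null | wc -l)
[ "$ctx_count" -ge 1 ] && echo "contexts: OK ($ctx_count)" || echo "contexts: NONE FOUND"

# A default StorageClass is needed for persistence.enabled=true (the default)
if kubectl get storageclass 2>/dev/null | grep -q '(default)'; then
  echo "default StorageClass: OK"
else
  echo "default StorageClass: MISSING (set persistence.enabled=false)"
fi

# Server version should be 1.28 or newer
kubectl version 2>/dev/null | grep -i 'server version' || echo "server version: UNKNOWN"
```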

Fastest Path

Prerequisites: You must install the kubestellar-mcp plugins before running this command — they are not installed by start.sh. See Step 1: Install Claude Code Plugins first.

One command downloads pre-built binaries, starts the backend + agent, and opens your browser:

curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/start.sh | bash

This downloads and starts the console binary only; it does not install the kubestellar-mcp plugins. The whole process typically takes under 45 seconds. No OAuth or GitHub credentials required — you get a local dev-user session automatically.


System Components

KubeStellar Console has 7 components that work together. For the full architectural deep-dive, data flow diagrams, and component interactions, see the Architecture page.

Component Summary

| # | Component | What it does | Required? |
|---|---|---|---|
| 1 | GitHub OAuth App | Lets users sign in with GitHub | Optional — without it, a local dev-user session is created |
| 2 | Frontend | React web app you see in the browser | Yes — included in the console executable |
| 3 | Backend | Go server that handles API calls | Yes — included in the console executable |
| 4 | MCP Bridge | Hosts kubestellar-ops and kubestellar-deploy MCP servers; the Backend queries them for cluster data | Yes — spawned as a child process by the console executable |
| 5 | AI Coding Agent + Plugins | Any MCP-compatible AI coding agent (Claude Code, Copilot, Cursor, Gemini CLI) with the kubestellar-ops/deploy plugins | Yes — Claude Marketplace or Homebrew |
| 6 | kc-agent | Local MCP + WebSocket server on port 8585 for kubectl execution | Yes — spawned by the console executable |
| 7 | Kubeconfig | Your cluster credentials | Yes — your existing ~/.kube/config |

Installation Steps

Step 1: Install Claude Code Plugins

The console uses kubestellar-mcp plugins to talk to your clusters. See the full kubestellar-mcp documentation for details.

Option A: Install from Claude Code Marketplace (recommended)

# In Claude Code, run:
/plugin marketplace add kubestellar/claude-plugins

Then:

  1. Go to /plugin → Marketplaces tab → click Update
  2. Go to /plugin → Discover tab
  3. Install kubestellar-ops and kubestellar-deploy

Verify with /mcp; you should see:

plugin:kubestellar-ops:kubestellar-ops · ✓ connected
plugin:kubestellar-deploy:kubestellar-deploy · ✓ connected

Option B: Install via Homebrew (source: homebrew-tap)

brew tap kubestellar/tap
brew install kubestellar-ops kubestellar-deploy
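
Either way, you can sanity-check that the binaries landed on PATH. These are the same checks the Troubleshooting section below uses; the `kubestellar-deploy version` call is assumed by symmetry with `kubestellar-ops version`:

```shell
# Both binaries must resolve on PATH and respond to `version`
command -v kubestellar-ops kubestellar-deploy
kubestellar-ops version
kubestellar-deploy version
```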

Step 2: Set Up Kubeconfig

The console reads clusters from your kubeconfig. Make sure you have access:

# List your clusters
kubectl config get-contexts
 
# Test access to a cluster
kubectl --context=your-cluster get nodes

To add more clusters, merge kubeconfigs:

KUBECONFIG=~/.kube/config:~/.kube/cluster2.yaml kubectl config view --flatten > ~/.kube/merged
mv ~/.kube/merged ~/.kube/config
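
The merge overwrites ~/.kube/config, so a slightly safer variant of the same commands keeps a backup and verifies the result afterwards:

```shell
# Keep a backup before overwriting the merged config
cp ~/.kube/config ~/.kube/config.bak

# Flatten both files into one config (same merge as above)
KUBECONFIG=~/.kube/config:~/.kube/cluster2.yaml kubectl config view --flatten > ~/.kube/merged
mv ~/.kube/merged ~/.kube/config

# Confirm contexts from both files are now listed
kubectl config get-contexts
```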

Step 3: Deploy the Console

Choose your deployment method:


Curl Quickstart

Downloads pre-built binaries and starts the console:

curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/start.sh | bash

This starts the backend (port 8080) and opens the frontend in your browser. No OAuth credentials needed — a local dev-user session is created automatically.


Run from Source

For contributors or if you want to build from source. No GitHub OAuth required.

Prerequisites

  • Go 1.24+
  • Node.js 20+
  • kubestellar-ops and kubestellar-deploy installed (see Step 1)

Setup

git clone https://github.com/kubestellar/console.git
cd console
./start-dev.sh

This compiles the Go backend, installs npm dependencies, starts a Vite dev server on port 5174, and creates a local dev-user session (no GitHub login needed).

Open http://localhost:5174


Run from Source with OAuth

To enable GitHub login (for multi-user deployments or to test the full auth flow):

1. Create a GitHub OAuth App

  1. Go to GitHub Developer Settings → OAuth Apps → New OAuth App

  2. Fill in:

    • Application name: KubeStellar Console
    • Homepage URL: http://localhost:8080
    • Authorization callback URL: http://localhost:8080/auth/github/callback
  3. Click Register application

  4. Copy the Client ID and generate a Client Secret

2. Clone the Repository

git clone https://github.com/kubestellar/console.git
cd console

3. Configure Environment

Create a .env file inside the cloned console/ directory (the repo root) with your OAuth credentials:

GITHUB_CLIENT_ID=your_client_id
GITHUB_CLIENT_SECRET=your_client_secret
FEEDBACK_GITHUB_TOKEN=ghp_your_personal_access_token

Recommended: FEEDBACK_GITHUB_TOKEN is a GitHub Personal Access Token (PAT) with public_repo scope that enables users to submit bug reports, feature requests, and feedback directly from the console. Without it, the in-app feedback and issue submission features are disabled. We strongly encourage setting this token so your users can contribute feedback seamlessly. You can create one at GitHub Settings → Tokens.

Important: The .env file must be in the same directory as startup-oauth.sh. The script loads it from its own directory, so creating it elsewhere will not work.
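
A quick check, run from the repo root, that the .env file is where the script expects it:

```shell
# startup-oauth.sh and .env must be siblings in the repo root
if [ -f ./startup-oauth.sh ] && [ -f ./.env ]; then
  echo ".env location: OK"
else
  echo ".env location: WRONG (create .env next to startup-oauth.sh)"
fi
```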

4. Start the Console

./startup-oauth.sh

Open http://localhost:8080 and sign in with GitHub.

Tip: Once running, click your profile avatar → the Developer panel shows your OAuth status, console version, and quick links.

| Environment | Callback URL |
|---|---|
| Local dev | http://localhost:8080/auth/github/callback |
| Kubernetes | https://console.your-domain.com/auth/github/callback |
| OpenShift | https://ksc.apps.your-cluster.com/auth/github/callback |

Helm Installation

1. Add Secrets

Create a secret with your OAuth credentials:

kubectl create namespace ksc
 
kubectl create secret generic ksc-secrets \
  --namespace ksc \
  --from-literal=github-client-id=YOUR_CLIENT_ID \
  --from-literal=github-client-secret=YOUR_CLIENT_SECRET

Recommended: Add a FEEDBACK_GITHUB_TOKEN to enable in-app feedback and issue submission. This is a GitHub Personal Access Token (PAT) with public_repo scope that allows users to submit bug reports, feature requests, and feedback directly from the console UI. Without it, these features are disabled. We strongly encourage including this token. You can create one at GitHub Settings → Tokens.

Optionally, add a Claude API key for AI features and the feedback token:

kubectl create secret generic ksc-secrets \
  --namespace ksc \
  --from-literal=github-client-id=YOUR_CLIENT_ID \
  --from-literal=github-client-secret=YOUR_CLIENT_SECRET \
  --from-literal=claude-api-key=YOUR_CLAUDE_API_KEY \
  --from-literal=feedback-github-token=YOUR_FEEDBACK_GITHUB_TOKEN

2. Install Chart

From GitHub Container Registry:

helm install ksc oci://ghcr.io/kubestellar/charts/kubestellar-console \
  --namespace ksc \
  --set github.existingSecret=ksc-secrets

From source:

git clone https://github.com/kubestellar/console.git
cd console
 
helm install ksc ./deploy/helm/kubestellar-console \
  --namespace ksc \
  --set github.existingSecret=ksc-secrets

3. Access the Console

Port forward (development):

Run the port-forward in the foreground in a dedicated terminal. This is the simplest pattern — press Ctrl+C to stop it, and there is no orphaned background process holding port 8080.

kubectl port-forward -n ksc svc/ksc-kubestellar-console 8080:8080

Open http://localhost:8080 in your browser.

Do not background the port-forward with a trailing & in copy-paste instructions (e.g. kubectl port-forward ... 8080:8080 &). It leaks the process, leaves port 8080 held after the shell exits, and causes “port already in use” errors on re-runs. If you genuinely need to run it in the background from a script, capture the PID and clean it up on exit:

kubectl port-forward -n ksc svc/ksc-kubestellar-console 8080:8080 &
PF_PID=$!
trap 'kill "$PF_PID" 2>/dev/null || true' EXIT INT TERM
# ... do work that needs the port-forward ...

Ingress (production):

helm upgrade ksc ./deploy/helm/kubestellar-console \
  --namespace ksc \
  --set github.existingSecret=ksc-secrets \
  --set ingress.enabled=true \
  --set ingress.hosts[0].host=ksc.your-domain.com

4. Run kc-agent Locally

The Helm chart deploys the console backend inside your cluster, but kc-agent is not included in the Helm deployment. kc-agent is a lightweight local process that bridges your browser to your local kubeconfig via WebSocket and MCP. You must run it separately on your workstation.

Install kc-agent:

# Via Homebrew
brew tap kubestellar/tap
brew install kc-agent

Start kc-agent:

kc-agent

This starts the agent on port 8585. It reads your local ~/.kube/config and exposes kubectl execution over WebSocket (for the browser console) and MCP (for AI coding agents).

Why local? kc-agent runs on your machine because it needs direct access to your kubeconfig and kubectl. The in-cluster console connects to kc-agent over WebSocket to execute commands against clusters that are only reachable from your workstation.

Without kc-agent: The console will still load, but cluster interactions that require kubectl (terminal commands, AI missions that modify resources) will not work. If the console was deployed without OAuth, it will fall back to demo mode. See Architecture for details.

OpenShift Installation

OpenShift uses Routes instead of Ingress:

helm install ksc ./deploy/helm/kubestellar-console \
  --namespace ksc \
  --set github.existingSecret=ksc-secrets \
  --set route.enabled=true \
  --set route.host=ksc.apps.your-cluster.com

The console will be available at https://ksc.apps.your-cluster.com

Docker Installation

For single-node or development deployments:

docker run -d \
  --name ksc \
  -p 8080:8080 \
  -e GITHUB_CLIENT_ID=your_client_id \
  -e GITHUB_CLIENT_SECRET=your_client_secret \
  -e FEEDBACK_GITHUB_TOKEN=ghp_your_personal_access_token \
  -v ~/.kube:/root/.kube:ro \
  -v ksc-data:/app/data \
  ghcr.io/kubestellar/console:latest

Kubernetes Deployment via Script

One command that handles helm, secrets, and ingress:

curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/deploy.sh | bash

Supports --context, --openshift, --ingress <host>, and --github-oauth flags.
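
To pass those flags through the pipe, use the standard `bash -s --` pattern (the flag values here are placeholders for your own context and hostname):

```shell
# `bash -s --` forwards everything after -- as arguments to the piped script
curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/deploy.sh \
  | bash -s -- --context my-prod --ingress ksc.example.com --github-oauth
```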

Multi-Cluster Access

The console reads clusters from your kubeconfig. To access multiple clusters:

  1. Merge kubeconfigs:

    KUBECONFIG=~/.kube/config:~/.kube/cluster2.yaml kubectl config view --flatten > ~/.kube/merged
    mv ~/.kube/merged ~/.kube/config
  2. Mount merged config in container/pod

  3. Verify access:

    kubectl config get-contexts

Kind quickstart (zero to browser)

A full local path from nothing to a running console in a Kind cluster. Tested on Kind v0.27 and Kubernetes 1.31.

# 1. Create a Kind cluster
kind create cluster --name kc-demo
 
# 2. Pre-pull the console image into Kind to avoid deploy.sh timeouts
docker pull ghcr.io/kubestellar/console:latest
kind load docker-image ghcr.io/kubestellar/console:latest --name kc-demo
 
# 3. Install the chart with no overrides — JWT secret auto-generates,
#    everything else falls back to demo mode.
kubectl create namespace kubestellar-console
 
helm install kc oci://ghcr.io/kubestellar/charts/kubestellar-console \
  -n kubestellar-console \
  --wait --timeout 10m
 
# 4. Verify (see "Verification commands" below for full checks)
kubectl -n kubestellar-console rollout status deploy \
  -l app.kubernetes.io/name=kubestellar-console --timeout=300s
 
# 5. Port-forward — service port is 8080, NOT 80
kubectl -n kubestellar-console port-forward svc/kc-kubestellar-console 8080:8080

Open http://localhost:8080. Because no GitHub OAuth was configured, you’ll land directly in demo mode.

Tear down:

helm uninstall kc -n kubestellar-console
kind delete cluster --name kc-demo

If helm install fails with context deadline exceeded, see Troubleshooting → deploy.sh timeouts — pre-pulling and loading the image (step 2 above) is the standard workaround.

Minikube quickstart (zero to browser)

Same idea as Kind, on Minikube. Tested on Minikube v1.35 with the default docker driver.

# 1. Create a Minikube profile
minikube start -p kc-demo --memory=4096 --cpus=2
 
# 2. Load the image into Minikube's Docker
minikube -p kc-demo image load ghcr.io/kubestellar/console:latest
 
# 3. Install the chart
kubectl create namespace kubestellar-console
 
helm install kc oci://ghcr.io/kubestellar/charts/kubestellar-console \
  -n kubestellar-console \
  --wait --timeout 10m
 
# 4. Verify
kubectl -n kubestellar-console rollout status deploy \
  -l app.kubernetes.io/name=kubestellar-console --timeout=300s
 
# 5. Port-forward
kubectl -n kubestellar-console port-forward svc/kc-kubestellar-console 8080:8080

Open http://localhost:8080.

Minikube ships with a default standard StorageClass, so the default persistence.enabled=true works without any extra setup. If you’re on a stripped-down profile without storage, add --set persistence.enabled=false and --set backup.enabled=false.

Tear down:

helm uninstall kc -n kubestellar-console
minikube delete -p kc-demo

Verification commands

After any install, run these to confirm everything is healthy. These are the same commands Troubleshooting tells you to run before opening a support issue.

NS=kubestellar-console
 
# 1. Deployment rolled out
kubectl -n "$NS" rollout status deploy \
  -l app.kubernetes.io/name=kubestellar-console --timeout=180s
 
# 2. Pods Ready 1/1
kubectl -n "$NS" get pods -l app.kubernetes.io/name=kubestellar-console
 
# 3. PVC bound (if persistence.enabled=true — the default)
kubectl -n "$NS" get pvc
 
# 4. Service exists on port 8080 and has at least one endpoint
kubectl -n "$NS" get svc,endpoints
 
# 5. No errors in the last 200 log lines
kubectl -n "$NS" logs -l app.kubernetes.io/name=kubestellar-console \
  --tail=200 --all-containers
 
# 6. HTTP health check through the port-forward
kubectl -n "$NS" port-forward svc/kc-kubestellar-console 8080:8080 &
PF_PID=$!
sleep 2
curl -sSf http://localhost:8080/api/health && echo OK
kill "$PF_PID" 2>/dev/null || true

kc-agent health (Helm / in-cluster mode only)

kc-agent runs on your workstation, not in the cluster. After starting kc-agent, verify it:

# Process is running and listening on 8585
lsof -nP -iTCP:8585 -sTCP:LISTEN
 
# Agent responds to a health probe
curl -sSf http://127.0.0.1:8585/healthz && echo OK

If kc-agent is not running, the console will show an “Agent Not Connected” banner. See Troubleshooting → Agent Not Connected.

Values and secrets reference

The chart accepts secret material in one of two modes. The full list lives in the chart README; the common keys are:

| Value | Default | existingSecret alternative | Auto-generated? |
|---|---|---|---|
| github.clientId / github.clientSecret | (empty) | github.existingSecret + github.existingSecretKeys.clientId / .clientSecret | No — OAuth disabled if unset |
| jwt.secret | (empty) | jwt.existingSecret + jwt.existingSecretKey (default jwt-secret) | Yes — chart generates a 64-char random value on first install |
| googleDrive.apiKey | (empty) | googleDrive.existingSecret + googleDrive.existingSecretKey | No — benchmark cards fall back to demo data |
| claude.apiKey | (empty) | claude.existingSecret + claude.existingSecretKey | No — AI features disabled |
| feedbackGithubToken.token | (empty) | feedbackGithubToken.existingSecret + feedbackGithubToken.existingSecretKey | No — in-app feedback disabled |

Secret creation — mode 1: chart-managed

The chart renders a Secret named {release-name}-kubestellar-console for you. Pass values inline:

helm install kc oci://ghcr.io/kubestellar/charts/kubestellar-console \
  -n kubestellar-console --create-namespace \
  --set github.clientId=YOUR_CLIENT_ID \
  --set github.clientSecret=YOUR_CLIENT_SECRET

The JWT secret is auto-generated; you don’t need to set anything.
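
If you want to confirm the generated value exists, you can read it back from the chart-managed Secret. The Secret name follows the {release-name}-kubestellar-console pattern above (release kc here), and jwt-secret is the default key name from the table:

```shell
# Decode the auto-generated JWT secret; expect 64 characters
kubectl -n kubestellar-console get secret kc-kubestellar-console \
  -o jsonpath='{.data.jwt-secret}' | base64 -d | wc -c
```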

Secret creation — mode 2: bring-your-own

Create the Secret before helm install — if you pass *.existingSecret for a Secret that doesn’t exist, the pod fails with CreateContainerConfigError. The chart does not create it for you.

kubectl create namespace kubestellar-console
 
kubectl -n kubestellar-console create secret generic kc-oauth-secret \
  --from-literal=github-client-id="YOUR_CLIENT_ID" \
  --from-literal=github-client-secret="YOUR_CLIENT_SECRET" \
  --from-literal=jwt-secret="$(openssl rand -hex 32)"
 
helm install kc oci://ghcr.io/kubestellar/charts/kubestellar-console \
  -n kubestellar-console \
  --set github.existingSecret=kc-oauth-secret \
  --set jwt.existingSecret=kc-oauth-secret

The default key names the chart expects are github-client-id, github-client-secret, and jwt-secret. If your Secret uses different keys, override github.existingSecretKeys.clientId, github.existingSecretKeys.clientSecret, and jwt.existingSecretKey accordingly.

JWT secret behavior (by mode)

| Scenario | What happens |
|---|---|
| Neither jwt.secret nor jwt.existingSecret set (default) | Chart generates a 64-char random JWT secret on first install and reuses it on upgrades. |
| jwt.secret set inline | Chart uses that value. Changing it rotates the key and invalidates active sessions. |
| jwt.existingSecret set | Chart reads key jwt-secret (or jwt.existingSecretKey) from the named Secret. The Secret must exist first. |

FEEDBACK_GITHUB_TOKEN — enables in-app feedback

The in-app feedback / /issue flow posts to GitHub on the user’s behalf. It requires a GitHub Personal Access Token with public_repo scope. In the Helm chart it’s feedbackGithubToken.token (or feedbackGithubToken.existingSecret). In local dev it’s the FEEDBACK_GITHUB_TOKEN environment variable or .env entry. Without it, the feedback buttons in the UI are disabled.

Data persistence and storage behavior

By default the chart sets:

  • persistence.enabled: true — a PVC is created for the SQLite database that holds sessions, user preferences, and the feedback queue.
  • backup.enabled: true — a CronJob periodically snapshots the SQLite database into a backup volume, and an init container restores the latest snapshot on pod startup.

This means:

  • On clusters without a default StorageClass, the pod will stay Pending until the PVC is bound. Set persistence.enabled=false and backup.enabled=false for a stateless evaluation install.
  • helm uninstall does not delete PVCs by default. Run kubectl -n kubestellar-console delete pvc -l app.kubernetes.io/instance=kc if you want a fresh install.
  • On local clusters (Kind, Minikube) the default StorageClass uses volumeBindingMode: WaitForFirstConsumer, which means the PV is not provisioned until a pod requests it. This is expected and not a failure — only act if the PVC is still Pending after the pod exists.
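
When a pod is stuck Pending, two checks usually identify which of the cases above you are in (namespace name assumes the quickstart installs):

```shell
# Is there a default StorageClass at all?
kubectl get storageclass

# PVC phase, plus the provisioning events that explain any Pending state
kubectl -n kubestellar-console get pvc
kubectl -n kubestellar-console describe pvc | tail -n 20
```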

See Persistence for the data model and backup CronJob details.

deploy.sh vs direct Helm

The deploy.sh convenience script wraps Helm plus a few extras. It is not a superset of Helm — behavior differs in ways users have hit:

| Behavior | deploy.sh | Direct helm install |
|---|---|---|
| Installs the chart | Yes — wraps helm install/upgrade | Yes |
| Helm --wait | Hardcoded --timeout 120s | You control it |
| Creates namespace | Yes | Only with --create-namespace |
| Creates GitHub OAuth Secret | With --github-oauth | You create it yourself |
| Configures Ingress | With --ingress <host> | Via --set ingress.* |
| Configures OpenShift Route | With --openshift | Via --set route.* |
| Uses --context | Yes, respects it | Respects current kube context |
| Loads image into Kind | No | No |

Practical rule of thumb:

  • On Kind / Minikube / anywhere image pull might exceed 120 s, skip deploy.sh and use direct Helm with --wait --timeout 10m (see the Kind quickstart).
  • On real clusters where the image is already cached on the node, the deploy.sh --github-oauth --ingress one-liner is genuinely the fastest path.
  • In either case, deploy.sh abstracts which values it sets; read the script or pass the equivalent --set flags directly if you want the change visible in your shell history or GitOps diff.

Upgrading

helm upgrade ksc oci://ghcr.io/kubestellar/charts/kubestellar-console \
  --namespace ksc \
  --reuse-values
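
To see what is deployed before and after the upgrade, compare the installed release against the latest published chart (helm show chart accepts oci:// references on Helm 3.8+):

```shell
# Currently deployed release and chart version
helm list -n ksc

# Latest published chart and app versions from the registry
helm show chart oci://ghcr.io/kubestellar/charts/kubestellar-console \
  | grep -E '^(version|appVersion):'
```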

Uninstalling

helm uninstall ksc --namespace ksc
kubectl delete namespace ksc

Troubleshooting

“MCP bridge failed to start”

Cause: kubestellar-ops or kubestellar-deploy plugins are not installed.

Solution: Follow Step 1: Install Claude Code Plugins or see the full kubestellar-mcp documentation.

# Via Homebrew
brew tap kubestellar/tap
brew install kubestellar-ops kubestellar-deploy

GitHub OAuth 404 or Blank Page

Cause: OAuth credentials not configured correctly.

Solutions:

  1. Verify the secret contains correct credentials
  2. Check callback URL matches exactly (see Run from Source with OAuth)
  3. View pod logs: kubectl logs -n ksc deployment/ksc-kubestellar-console

“GITHUB_CLIENT_SECRET is not set”

Cause: You’re running startup-oauth.sh without a .env file.

Solutions:

  1. Create a .env file with GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET (see Run from Source with OAuth)
  2. Or use ./start-dev.sh instead — it doesn’t require OAuth credentials

“exchange_failed” After GitHub Login

Cause: The Client Secret is wrong or has been regenerated.

Solutions:

  1. Go to GitHub Developer Settings → your OAuth App
  2. Generate a new Client Secret
  3. Update GITHUB_CLIENT_SECRET in your .env file
  4. Restart the console

“csrf_validation_failed”

Cause: The callback URL in GitHub doesn’t match the console’s URL.

Solutions:

  1. Verify the Authorization callback URL in your GitHub OAuth App settings matches exactly: http://localhost:8080/auth/github/callback
  2. Clear your browser cookies for localhost
  3. Restart the console

Clusters Not Showing

Cause: kubeconfig not mounted or MCP bridge not running.

Solutions:

  1. Verify kubeconfig is mounted in the pod
  2. Check MCP bridge status in logs
  3. Verify kubestellar-mcp tools are installed: which kubestellar-ops kubestellar-deploy

Plugin Shows Disconnected

Cause: Binary not in PATH or not working.

Solutions:

  1. Verify binary is installed: which kubestellar-ops
  2. Verify binary works: kubestellar-ops version
  3. Restart Claude Code

See kubestellar-mcp troubleshooting for more details.