
Kubernetes

This guide extends the Quickstart to use a real Kubernetes cluster instead of the mock provider. By the end you will have submitted, approved, and executed a kubectl command using a temporary RBAC binding that jitsudo manages.

Before starting, it helps to understand how jitsudo’s abstract concepts map to Kubernetes:

jitsudo concept         Kubernetes equivalent
---------------         ---------------------
--role                  A ClusterRole name that must already exist in the cluster
--scope                 A namespace (creates a RoleBinding) or * (creates a ClusterRoleBinding)
Grant issued            jitsudo creates a RoleBinding or ClusterRoleBinding for the requester’s identity
Grant expired/revoked   jitsudo deletes the RoleBinding or ClusterRoleBinding
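
To make the mapping concrete, here is a sketch of the kind of RoleBinding a namespace-scoped grant produces. The object name and subject below are illustrative only; jitsudo generates the real names:

```yaml
# Illustrative sketch: the actual name, labels, and subject are chosen by jitsudo
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jitsudo-grant-example      # hypothetical generated name
  namespace: jitsudo-sandbox       # the --scope value
subjects:
  - kind: User
    name: alice@example.com        # the requester's identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: jitsudo-sandbox-viewer     # the --role value
  apiGroup: rbac.authorization.k8s.io
```

When the grant expires or is revoked, jitsudo deletes this object, which removes the access atomically.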
Prerequisites:

  • A working jitsudo local environment (from the Quickstart)
  • A Kubernetes cluster with kubectl configured and cluster-admin access (a kind or minikube cluster works perfectly for this guide)
Create a sandbox namespace to experiment in:

kubectl create namespace jitsudo-sandbox

jitsudo’s Kubernetes provider assumes the ClusterRole already exists — it creates and deletes RoleBinding objects, it does not create ClusterRole objects. Create the sandbox role:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jitsudo-sandbox-viewer
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
EOF

Step 3: Create a Service Account for jitsudod


jitsudod needs a service account with permission to create and delete RoleBinding and ClusterRoleBinding objects:

# Ensure the jitsudo-system namespace exists before creating the ServiceAccount
kubectl get namespace jitsudo-system >/dev/null 2>&1 || \
  kubectl create namespace jitsudo-system

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jitsudo-control-plane
  namespace: jitsudo-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jitsudo-rbac-manager
rules:
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["rolebindings", "clusterrolebindings"]
    verbs: ["get", "list", "create", "delete"]
  # Kubernetes blocks privilege escalation: to create a binding that
  # references a role, the creator needs the "bind" verb on that role
  # (or must itself hold every permission the role grants)
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles"]
    verbs: ["bind"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jitsudo-rbac-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jitsudo-rbac-manager
subjects:
  - kind: ServiceAccount
    name: jitsudo-control-plane
    namespace: jitsudo-system
EOF

For local development, generate a kubeconfig for the jitsudo service account:

# Get a short-lived service account token
TOKEN=$(kubectl create token jitsudo-control-plane \
  --namespace jitsudo-system \
  --duration 24h)

# Get the cluster CA and API server address
CLUSTER_CA=$(kubectl config view --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')
CLUSTER_SERVER=$(kubectl config view \
  -o jsonpath='{.clusters[0].cluster.server}')

# Write a kubeconfig for jitsudod
cat > jitsudo-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
clusters:
  - name: sandbox
    cluster:
      certificate-authority-data: ${CLUSTER_CA}
      server: ${CLUSTER_SERVER}
contexts:
  - name: jitsudo
    context:
      cluster: sandbox
      user: jitsudo
current-context: jitsudo
users:
  - name: jitsudo
    user:
      token: ${TOKEN}
EOF

Add the Kubernetes provider block to your jitsudod.yaml:

providers:
  kubernetes:
    kubeconfig: "./jitsudo-kubeconfig.yaml"
    max_duration: "2h"

Restart jitsudod with make docker-up (or your equivalent).

Create an eligibility policy that lets members of the sre group request the sandbox viewer role:

cat > k8s-eligibility.rego << 'EOF'
package jitsudo.eligibility

import future.keywords.if

default allow = false
default reason = "not authorized"

allow if {
    input.user.groups[_] == "sre"
    input.request.provider == "kubernetes"
    input.request.role == "jitsudo-sandbox-viewer"
    input.request.resource_scope == "jitsudo-sandbox"
    input.request.duration_seconds <= 3600
}

reason = "allowed" if { allow }
EOF

jitsudo policy apply -f k8s-eligibility.rego \
  --type eligibility \
  --name k8s-sandbox-eligibility
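
The exact input document shape is defined by jitsudod, but based on the fields the policy references, a request that satisfies every condition would look roughly like this (user id and values are illustrative):

```json
{
  "user": { "id": "alice", "groups": ["sre"] },
  "request": {
    "provider": "kubernetes",
    "role": "jitsudo-sandbox-viewer",
    "resource_scope": "jitsudo-sandbox",
    "duration_seconds": 1800
  }
}
```

Note that a 30-minute request (1800 seconds) passes the `duration_seconds <= 3600` check, while a 2-hour request would be denied even though the provider's max_duration allows it.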
Request temporary access to the sandbox namespace:

jitsudo request \
  --provider kubernetes \
  --role jitsudo-sandbox-viewer \
  --scope jitsudo-sandbox \
  --duration 30m \
  --reason "Testing real Kubernetes provider - sandbox"

The --scope value is the namespace. For a cluster-scoped binding, use --scope *.
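
For illustration, with --scope * the grant maps to a cluster-scoped binding instead. A sketch of what jitsudo would create in that case (the object name is generated by jitsudo; the subject is the requester):

```yaml
# Illustrative sketch: name and subject are chosen by jitsudo at grant time
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jitsudo-grant-example      # hypothetical generated name
subjects:
  - kind: User
    name: alice@example.com        # the requester's identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: jitsudo-sandbox-viewer
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRoleBinding grants the role in every namespace, so prefer namespace scopes unless the task genuinely requires cluster-wide access.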

You should see:

✓ Request submitted (ID: req_01...)
⏳ Awaiting approval

In a second terminal:

jitsudo approve req_01...

Step 8: Verify the RoleBinding Was Created

# Confirm jitsudo created the RoleBinding
kubectl get rolebindings -n jitsudo-sandbox

# Inspect it (kubectl get -o name already includes the resource type,
# so pass the result to describe without repeating "rolebinding")
kubectl describe -n jitsudo-sandbox \
  $(kubectl get rolebindings -n jitsudo-sandbox -o name | grep jitsudo)

You will see a RoleBinding created by jitsudo’s service account, binding your user identity to jitsudo-sandbox-viewer in the jitsudo-sandbox namespace.

Step 9: Execute with Real Kubernetes Credentials

# List pods in the sandbox namespace using the elevated binding
jitsudo exec req_01... -- kubectl get pods -n jitsudo-sandbox

The exec command injects KUBECONFIG pointing to a temporary kubeconfig that authenticates as your user identity. The RoleBinding grants get, list, and watch on pods, services, and configmaps in the sandbox namespace.
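
The exact file jitsudo writes is an implementation detail, but conceptually the injected kubeconfig is a minimal config that authenticates as the requester, roughly:

```yaml
# Conceptual sketch only: names, server, and token are illustrative placeholders
apiVersion: v1
kind: Config
clusters:
  - name: grant-cluster
    cluster:
      certificate-authority-data: <cluster CA>
      server: <API server address>
contexts:
  - name: grant
    context:
      cluster: grant-cluster
      user: requester
current-context: grant
users:
  - name: requester
    user:
      token: <short-lived credential for your identity>
```

Because authorization comes from the temporary RoleBinding, the same command fails with Forbidden once the grant is revoked or expires.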

Inspect the audit trail for the request:
jitsudo audit --request req_01...

You should see request.created, request.approved, grant.issued. When the TTL expires and the sweeper runs, grant.revoked will appear and the RoleBinding will be deleted.

Instead of waiting for the TTL to expire, revoke the grant immediately:
jitsudo revoke req_01...

Verify the RoleBinding is gone:

kubectl get rolebindings -n jitsudo-sandbox
# Should return: No resources found in jitsudo-sandbox namespace.

Verify access is revoked:

jitsudo exec req_01... -- kubectl get pods -n jitsudo-sandbox
# Should fail: Forbidden
When you are finished, clean up everything this guide created:
kubectl delete namespace jitsudo-sandbox
kubectl delete clusterrole jitsudo-sandbox-viewer jitsudo-rbac-manager
kubectl delete clusterrolebinding jitsudo-rbac-manager
kubectl delete serviceaccount jitsudo-control-plane -n jitsudo-system
rm jitsudo-kubeconfig.yaml k8s-eligibility.rego
  • See the full Kubernetes Provider guide for production configuration, including in-cluster service account auth, namespace restrictions, and RBAC design for multi-tenant clusters
  • See Security Hardening before deploying to production