Note: most of these labs only work on a self-managed Kubernetes cluster running on Linux, not on managed offerings such as EKS, AKS, or GKE.
Lab 1: The “RESTful” Nature (Talking Directly to the Hub)
Concept: The text states the API Server is the “Central Hub” that speaks standard HTTP. This lab bypasses the kubectl abstraction to look at the raw JSON.
1: Start a Proxy: Open a terminal and run a proxy to handle authentication for you locally.
kubectl proxy --port=8080 &
2: Send a GET Request: Act like a browser and query the API server directly.
curl http://localhost:8080/api/v1/pods
3: Analyze: You will see a massive JSON output. This proves the API server returns raw data (Desired State + Status) which kubectl usually formats for you.
Filter with jq (Optional):
curl -s http://localhost:8080/api/v1/pods | jq '.items[].metadata.name'
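Tip: kubectl can also hit a raw REST path for you without the proxy. A quick alternative check (assuming your workloads live in the default namespace):
kubectl get --raw /api/v1/namespaces/default/pods | jq '.items[].metadata.name'
Either way, you are looking at the same HTTP API the rest of the cluster uses.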
Lab 2: The “Listener” (The Watch Mechanism)
Concept: The text describes the API Server as “The Listener” that notifies components instantly rather than polling.
1: Terminal 1 (The Watcher): Run a command that establishes a long-lived connection.
kubectl get pods -w
2: Terminal 2 (The Actor): Create a new pod.
kubectl run watch-test --image=nginx
3: Observe Terminal 1: You will see the status change from Pending -> ContainerCreating -> Running instantly. This is the Watch API in action, pushing updates to the client.
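If the proxy from Lab 1 is still running, you can also see the raw Watch stream over plain HTTP. A minimal sketch (the ?watch=true query parameter opens the long-lived connection; -N stops curl from buffering):
curl -N "http://localhost:8080/api/v1/namespaces/default/pods?watch=true"
Each line printed is a JSON event (ADDED, MODIFIED, DELETED) pushed by the API server the moment something changes.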
Lab 3: The “Guard” Gate 1 (Authentication – Service Accounts)
Concept: The text highlights “Service Accounts (Robots)” as a key Identity Type.
1: Create a Service Account:
kubectl create sa robot-user
2: Create a Token (Modern approach):
kubectl create token robot-user
3: Test Access: Copy the token output and try to talk to the API server as this “Robot”.
# Replace <TOKEN> with your actual token
kubectl get pods --token=<TOKEN>
Result: It will either succeed (if default permissions allow it) or fail with “Forbidden” (if the cluster is locked down). Either way, the API server identified the “Robot”.
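You can also ask the API server to evaluate the robot's permissions without handling a token at all, using impersonation (this assumes the ServiceAccount lives in the default namespace and that your own user is allowed to impersonate):
kubectl auth can-i list pods --as=system:serviceaccount:default:robot-user
The yes/no answer is the same AuthN + AuthZ decision the “Guard” makes for the real token.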
Lab 4: The “Guard” Gate 1 (Authentication – X.509 Certificates)
Concept: The text details X.509 Certs as the standard for Admins. Let’s simulate creating a new human user, “Rajkumar”.
1: Generate a Private Key:
openssl genrsa -out rajkumar.key 2048
2: Generate a CSR (Certificate Signing Request): Note CN=rajkumar (User) and O=devs (Group).
openssl req -new -key rajkumar.key -out rajkumar.csr -subj "/CN=rajkumar/O=devs"
3: Send CSR to Kubernetes:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: rajkumar
spec:
  request: $(cat rajkumar.csr | base64 | tr -d "\n")
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
4: Approve the CSR:
kubectl certificate approve rajkumar
5: Retrieve the Certificate:
kubectl get csr rajkumar -o jsonpath='{.status.certificate}' | base64 -d > rajkumar.crt
6: Create Kubeconfig Credential:
kubectl config set-credentials rajkumar --client-certificate=rajkumar.crt --client-key=rajkumar.key --embed-certs=true
You now have a valid X.509 user credential managed by the API Server.
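Optional sanity check: confirm the signed certificate carries the identity the API Server will see (the CN becomes the username, the O becomes the group):
openssl x509 -in rajkumar.crt -noout -subject
You should see something like subject=O = devs, CN = rajkumar.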
Lab 5: The “Guard” Gate 2 (Authorization – RBAC)
Concept: The text states, “Once identified, the server checks if the user has permission.”
1: Test Access (Fail First): Use the user “Rajkumar” from Lab 4.
kubectl get pods --user=rajkumar
Expected Result: Error from server (Forbidden): pods is forbidden: User "rajkumar" cannot list resource "pods" (AuthN passed, AuthZ failed).
2: Create the Role (The Permission):
kubectl create role pod-reader --verb=list,get --resource=pods
3: Create the Binding (The Link):
kubectl create rolebinding rajkumar-pod-reader --role=pod-reader --user=rajkumar
4: Test Access (Success):
kubectl get pods --user=rajkumar
Expected Result: Success. The “Guard” now lets you pass.
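For reference, those two imperative commands are just shorthand for ordinary API objects stored via the API Server; the equivalent YAML (in the default namespace) looks roughly like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rajkumar-pod-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: rajkumar
You can confirm what was actually created with kubectl get role pod-reader -o yaml.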
Lab 6: The “Gatekeeper” (Admission Control)
Concept: The text mentions Admission Control as the final safety check (e.g., “Image pull policy”). We will use a LimitRange object (enforced by the built-in LimitRanger admission plugin) to mutate a request.
1: Create a LimitRange: This tells the API server to mutate any pod that doesn’t have resource limits defined.
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
Save as limitrange.yaml and kubectl apply -f limitrange.yaml.
2: Create a Pod without limits:
kubectl run test-pod --image=nginx
Inspect the Pod:
kubectl get pod test-pod -o yaml | grep memory
Result: You will see limits: memory: 512Mi and requests: memory: 256Mi. The API Server (Admission Controller) intercepted your request and injected these values before saving to Etcd.
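LimitRanger is only one of many admission plugins compiled into the API server. On a kubeadm cluster you can peek at the explicitly enabled ones in the static pod manifest on the control-plane node (the path is the kubeadm default; LimitRanger itself is on by default, so it may not appear in the flag):
sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml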
Lab 7: Extensibility (Custom Resource Definitions – CRDs)
Concept: The text states, “To the API Server, these look and act just like native Pods.”
1: Define a CRD:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
Save as crd.yaml and kubectl apply -f crd.yaml.
2: Create a Custom Resource:
apiVersion: stable.example.com/v1
kind: Backup
metadata:
  name: my-new-backup
spec:
  cronSpec: "* * * * */5"
Save as my-backup.yaml and apply it.
3: Verify:
kubectl get backups
Result: The API Server now serves your custom “Backup” object just like it serves Pods.
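Because the new kind is served by the same “Central Hub”, you can also reach it over the raw REST path, exactly as in Lab 1 (assuming the Backup was created in the default namespace):
kubectl get --raw /apis/stable.example.com/v1/namespaces/default/backups | jq '.items[].metadata.name'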
Lab 8: Observability (The Aggregation Layer)
Concept: The text mentions kubectl top uses the “Aggregation Layer” to proxy requests to an extension server (metrics-server).
1: Check Aggregation:
kubectl get apiservices
Look for v1beta1.metrics.k8s.io. If it is present and its AVAILABLE column shows True, the aggregation layer is working.
2: Test the Proxy:
kubectl top pods
Explanation: The main API Server receives this request, looks up the registered metrics.k8s.io APIService, and forwards the request to the metrics-server pod.
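You can watch that proxying happen by calling the aggregated API path directly; the main API Server forwards this request to metrics-server behind the scenes (this assumes metrics-server is installed and has had a minute to scrape metrics):
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | jq '.items[0].containers[0].usage'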
Lab 9: Troubleshooting (Etcd & Healthz)
Concept: The text calls Etcd “The Brain’s Memory” and mentions checking health.
1: Check API Health (Verbose):
kubectl get --raw='/healthz?verbose'
2: Analyze the Output: Look specifically for the [+]etcd line.
Output: [+]etcd ok
Context: This confirms the API Server can successfully read from and write to the Etcd database. If this check fails, the cluster is effectively down.
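The API Server exposes livez and readyz endpoints with the same verbose switch on reasonably recent versions; checking all three is a sensible first step when the control plane misbehaves:
kubectl get --raw='/livez?verbose'
kubectl get --raw='/readyz?verbose'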
Lab 10: API Priority & Fairness (The Scaler)
Concept: The text mentions “APF” prevents low-priority requests from starving critical ones.
1: View Flow Schemas:
kubectl get flowschemas
2: Inspect Priority Levels:
kubectl get prioritylevelconfigurations
3: Analyze: Notice categories like system-leader-election (high priority) versus workload-low (low priority). This visualizes the internal traffic shaping described in the text.
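On reasonably recent versions, the API Server also exposes a debug endpoint that dumps the live state of those priority levels (queue lengths, executing requests), which makes the traffic shaping very concrete:
kubectl get --raw /debug/api_priority_and_fairness/dump_priority_levels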
Lab 11: Troubleshooting (Expired Certs Simulation)
Concept: The text lists “Expired Certs” as a common issue.
1: Check Expiration (Kubeadm clusters only):
kubeadm certs check-expiration
Note: This command requires you to be logged into the control plane node via SSH.
2: Simulate the Solution: The text says the fix is kubeadm certs renew all. (Do not run this unless you are on a sandbox cluster you own!) Knowing where to find this information is the real practice here.
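If you want to see the raw data kubeadm reads, you can also inspect a certificate directly with openssl (the path assumes the kubeadm default PKI directory on the control-plane node):
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate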