A pod is a collection of containers that share a network namespace (and can share storage volumes) and is the basic unit of deployment in Kubernetes. All containers in a pod are scheduled on the same node.

To launch a pod using the container image quay.io/openshiftlabs/simpleservice:0.5.0, which exposes an HTTP API on port 9876, execute:

kubectl run sise --image=quay.io/openshiftlabs/simpleservice:0.5.0 --port=9876
pod/sise created
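
If you are curious what manifest kubectl run generates, recent kubectl releases can print it without creating anything (older releases use the plain --dry-run flag instead of --dry-run=client):

kubectl run sise --image=quay.io/openshiftlabs/simpleservice:0.5.0 --port=9876 --dry-run=client -o yaml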

Note: Deprecation warning! Older releases of kubectl produce a deployment resource from the kubectl run example above, while newer releases produce a single pod resource. The example commands in this section work either way (assuming you substitute your own pod name), but if your kubectl created a deployment you will need to run kubectl delete deployment sise at the end of this section to clean up.

Check to see if the pod is running:

kubectl get pods
NAME    READY     STATUS    RESTARTS   AGE
sise    1/1       Running   0          1m

If the above output shows a longer, generated pod name, make sure to use it in place of sise in the following examples.
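
To also see the pod IP and the node the pod was scheduled on, add -o wide:

kubectl get pods -o wide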

This container image happens to include a copy of curl, which provides an additional way to verify that the primary web service process is responding (over the pod-local network, at least):

kubectl exec sise -t -- curl -s localhost:9876/info
{"host": "localhost:9876", "version": "0.5.0", "from": "127.0.0.1"}

From within the cluster (e.g. via kubectl exec or oc rsh), this pod is also directly accessible via its pod IP, in this example 172.17.0.3:

kubectl describe pod sise | grep IP:
IP:                     172.17.0.3
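
The same IP can be read directly from the pod's status field, which is handy in scripts:

kubectl get pod sise -o jsonpath='{.status.podIP}'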

The Kubernetes proxy API provides another way to reach pods inside the cluster from outside, for example with curl:

export K8S_API="https://$(kubectl config get-clusters | tail -n 1)"
export API_TOKEN="$(kubectl config view -o jsonpath={.users[-1].user.token})"
export NAMESPACE="default"
export PODNAME="sise"
curl -s -k -H"Authorization: Bearer $API_TOKEN" \
$K8S_API/api/v1/namespaces/$NAMESPACE/pods/$PODNAME/proxy/info
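
If assembling the API server URL and token is inconvenient in your environment, kubectl proxy offers a simpler route to the same proxy endpoint (run it in a separate terminal; 8001 is its default port):

kubectl proxy
curl -s http://localhost:8001/api/v1/namespaces/default/pods/sise/proxy/info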

Cleanup:

kubectl delete pod,deployment sise
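
Depending on which kubectl release you used (see the note above), only one of the two resources named sise will exist; to suppress the NotFound error for the other, you can add --ignore-not-found:

kubectl delete pod,deployment sise --ignore-not-found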

Using a configuration file

You can also create a pod from a configuration file. In this case the pod runs the simpleservice image from above alongside a generic CentOS container:

kubectl apply -f https://raw.githubusercontent.com/openshift-evangelists/kbe/main/specs/pods/pod.yaml
pod/twocontainers created
kubectl get pods
NAME                      READY     STATUS    RESTARTS   AGE
twocontainers             2/2       Running   0          7s
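
For reference, the manifest behind that URL defines roughly the following two-container pod; the exact container names, image tags, and the command used to keep the CentOS container running may differ in the published file:

apiVersion: v1
kind: Pod
metadata:
  name: twocontainers
spec:
  containers:
  - name: sise
    image: quay.io/openshiftlabs/simpleservice:0.5.0
    ports:
    - containerPort: 9876
  - name: shell
    image: centos:7
    command: ["/bin/bash", "-c", "sleep 10000"]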

Containers that share a pod are able to communicate using local networking.

This example execs a curl call inside the pod to query the sise container via localhost; kubectl exec targets the pod's default container unless you pass -c <container-name> (e.g. -c shell for the CentOS sidecar from the sketch above):

kubectl exec twocontainers -t -- curl -s localhost:9876/info
{"host": "localhost:9876", "version": "0.5.0", "from": "127.0.0.1"}

Define the resources attribute to influence how much CPU and/or RAM a container in a pod can use (here: 64Mi of RAM and 0.5 CPUs):

kubectl create -f https://raw.githubusercontent.com/openshift-evangelists/kbe/main/specs/pods/constraint-pod.yaml
pod/constraintpod created
kubectl describe pod constraintpod
...
Containers:
  sise:
    ...
    Limits:
      cpu:      500m
      memory:   64Mi
    Requests:
      cpu:      500m
      memory:   64Mi
...
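
The corresponding section in a pod manifest looks roughly like this; the container name and image mirror the example above, and the exact contents of constraint-pod.yaml may differ:

apiVersion: v1
kind: Pod
metadata:
  name: constraintpod
spec:
  containers:
  - name: sise
    image: quay.io/openshiftlabs/simpleservice:0.5.0
    ports:
    - containerPort: 9876
    resources:
      limits:
        cpu: 500m
        memory: 64Mi

If the manifest only sets limits, Kubernetes defaults the requests to the same values, which matches the describe output above.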

Learn more about resource constraints in the official Kubernetes documentation on managing resources for containers.

To clean up and remove all the remaining pods, try:

kubectl delete pod twocontainers
kubectl delete pod constraintpod

To sum up, launching one or more containers together in Kubernetes is simple. However, doing it directly as shown above comes with a serious limitation: you have to take care of keeping the containers running in case of a failure yourself. A better way to supervise pods is to use deployments, which give you much more control over the life cycle, including rolling out a new version.
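
As a preview, a minimal supervised alternative to the bare pod above would be the following (the deployment name sise-deploy is just an example); the deployment controller then creates pods for you and replaces any that fail:

kubectl create deployment sise-deploy --image=quay.io/openshiftlabs/simpleservice:0.5.0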

Next