Kubernetes is pronounced /koo-ber-nay'-tace/ and is Greek for "helmsman" or "pilot".
- Node: A single virtual or physical machine in a Kubernetes cluster.
- Cluster: A group of nodes, typically firewalled from the internet, that form the primary compute resources managed by Kubernetes.
- Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
- Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model. Examples of a Cluster network include Overlays such as flannel or SDNs such as OVS.
- Service: A Kubernetes Service that identifies a set of pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
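As a sketch of the Service entry above, a minimal manifest that identifies pods via a label selector might look like the following (the `my-app` names and ports are illustrative, not from this doc):

```yaml
# Hypothetical Service: selects pods labelled app=my-app and exposes
# port 80 on a virtual IP routable only inside the cluster network.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP        # virtual IP, cluster-internal only
  selector:
    app: my-app          # label selector identifying the backing pods
  ports:
    - port: 80           # Service port
      targetPort: 8080   # container port on the selected pods
```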
What is a pod?
A pod is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers. A pod’s contents are always co-located and co-scheduled, and run in a shared context.
Containers within a pod share an IP address and port space, and can find each other via localhost. They can also communicate with each other using standard inter-process communication (IPC) mechanisms such as SystemV semaphores or POSIX shared memory. Containers in different pods have distinct IP addresses and cannot communicate by IPC without special configuration; they usually communicate with each other via pod IP addresses.
In terms of Docker constructs, a pod is modelled as a group of Docker containers with shared namespaces and shared volumes.
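The shared context can be sketched with a minimal (hypothetical) two-container pod manifest; because both containers share the pod's network namespace, the sidecar reaches the web server on `localhost`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar      # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25       # serves on port 80
    - name: sidecar
      image: busybox:1.36
      # Same network namespace: the sidecar polls nginx via localhost.
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 5; done"]
```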
- Helm Tiller: Helm is a package manager for Kubernetes and is required to install all the other applications. It is installed in its own pod inside the cluster, which runs the `helm` CLI in a safe environment.
- Ingress: Ingress can provide load balancing, SSL termination, and name-based virtual hosting. It acts as a web proxy for your applications and is useful if you want to use Auto DevOps or deploy your own web apps.
- Prometheus: Prometheus is an open-source monitoring and alerting system useful to supervise your deployed applications.
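The Ingress entry above (load balancing and name-based virtual hosting in front of a Service) can be sketched with a minimal manifest; the hostname and Service name are illustrative, and the API group may differ on older clusters:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # illustrative name
spec:
  rules:
    - host: app.example.com    # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app   # existing Service to proxy to
                port:
                  number: 80
```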
How to manage the Google Kubernetes Engine
The first step is to check the defined clusters [using the Google Cloud Platform console][gcp-console]. The most common initial goal is to build and push a containerised application: we start by creating a new Docker image on our machine, check it locally, and then push the image to a container registry.
Using GCP, all these operations can be performed with a mix of `gcloud` and `kubectl` commands. First things first, we must create one or more images using the instructions provided here. After that, one can follow the instructions of this gcloud tutorial.
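A sketch of that build-and-push flow, assuming Docker, an authenticated `gcloud` setup, and placeholder names (`PROJECT_ID`, `my-app` are not from this doc):

```shell
# Build the image locally from the current directory's Dockerfile.
docker build -t gcr.io/PROJECT_ID/my-app:v1 .

# Sanity-check it locally before publishing.
docker run --rm -p 8080:8080 gcr.io/PROJECT_ID/my-app:v1

# Push to Google Container Registry (requires prior
# `gcloud auth configure-docker`).
docker push gcr.io/PROJECT_ID/my-app:v1
```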
Setting a static IP and a custom domain for the application
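The static-IP step can be sketched as follows (the address name and region are illustrative; assumes an authenticated `gcloud`). Point your domain's A record at the reserved address afterwards:

```shell
# Reserve a regional static external IP address.
gcloud compute addresses create together-rx-ip --region=us-central1

# Look up the reserved address to configure your DNS A record.
gcloud compute addresses describe together-rx-ip --region=us-central1
```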
Update the image used by the current Kubernetes pod
Build a new snapshot optimized for production
ng build --prod
Create a new image and tag it with a new version
Test the new image locally
Push the newly created image to the container registry
To update the deployed container we can use the set command:
kubectl set image deployment/together-rx ...
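The update steps above might look like the following end to end; `PROJECT_ID` and the container name after `together-rx=` are placeholder assumptions, not values from this doc:

```shell
# 1. Build a production snapshot of the Angular app.
ng build --prod

# 2. Build and tag a new image version.
docker build -t gcr.io/PROJECT_ID/together-rx:v2 .

# 3. Test the new image locally.
docker run --rm -p 8080:8080 gcr.io/PROJECT_ID/together-rx:v2

# 4. Push the newly created image to the container registry.
docker push gcr.io/PROJECT_ID/together-rx:v2

# 5. Roll the Deployment onto the new image.
kubectl set image deployment/together-rx together-rx=gcr.io/PROJECT_ID/together-rx:v2
```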
Problem: A frequent question that comes up on Slack and Stack Overflow is how to trigger an update to a Deployment/RS/RC when the image tag hasn't changed but the underlying image has.
- There is an existing Deployment with image `foo:latest`
- User builds a new image
- User pushes `foo:latest` to their registry
- User wants to do something here to tell the Deployment to pull the new image and do a rolling-update of existing pods
Possible solution for our current scenario: you are using `latest` for testing (this is the "no sed" use case). In this case downtime is fine, and indeed the right approach is likely to completely blow away your stack and redeploy from scratch to get a clean run (=> `kubectl set <...>`).
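Under those assumptions (a `:latest` tag and acceptable downtime), the "blow away and redeploy" approach might be sketched as follows; `deployment.yaml` is a hypothetical manifest file name:

```shell
# Tear down the existing Deployment (its pods go away with it).
kubectl delete deployment/together-rx

# Recreate it from the manifest; the fresh pods pull :latest again
# (imagePullPolicy defaults to Always for :latest tags).
kubectl apply -f deployment.yaml
```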
Keep reading here.
Delete a Kubernetes service
To stop and delete the Kubernetes service corresponding to the image we created and exposed, we instruct Kubernetes to tell the load balancer to remove the provisioned service with a simple command like this:
kubectl delete service container-1
NOTE: The load balancer is deleted asynchronously in the background when you run `kubectl delete`. Wait until the load balancer is deleted by watching the output of the following command:
gcloud compute forwarding-rules list
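Putting the two steps together, one way to wait for the asynchronous cleanup is to keep re-listing the forwarding rules until the one backing the deleted Service disappears (a sketch; the rule's name is auto-generated by GKE, so you watch the list shrink rather than grep for the Service name):

```shell
# Delete the Service; GKE removes the cloud load balancer asynchronously.
kubectl delete service container-1

# Re-run the listing every two seconds until the rule is gone.
watch gcloud compute forwarding-rules list
```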