Kubernetes – kubectl config – admin configuration

The “kubectl config” command allows you to do many things, like setting credentials or changing the context.

You can get, for example, a human-readable version of the config file right in your console by running:

$ kubectl config view

You get:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.17.0.21:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

As you can see, there's the CA certificate data (omitted in this view), together with the client certificate and key.

You can find the same information, in full, in the file /etc/kubernetes/admin.conf, whose content looks like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1ESXhOakUzTVRVeU9Gb1hEVE15TURJeE5ERTNNVFV5T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSzZXCmpUSC81bDQ5V3NXaUw0ZnN3T3VVS202NXBSZGZtMk4xMG0wVEY4cElnM2RycSsvVHlqZVA4dDEyc01QdGFkUFQKRWhqTFA4NlpzZmVsbWk4WjZsZEhEWHdlTHZNU3ZWT2ZXL0lMRU9BK281R2c4MnYwUEVUZXFhZXE0KzRMdEdhdgplQ3RmclRHVE1zRkE0c3NxSkxyV3BZRmNHR1Qyd0lpY0t6R3NRVjMrZFk0djd0UTVaN3B6V2Nac3BLUFBBS0lpCjFOQm4vTXJUejZnYkhJNkxlV0k0dCtjODdHcjBOL3krRkFWdzljSGh6SGhjNit4ekRIWXR3WHlyWi9TaVFkdk4KRXAvaGNTT2IzK010WDhQMFRRWnkyT0FKK0FKZk5YNFlKb0lGbXprTzVjVWRtakJyeEF5Yk1vd2ZNUzRmR0RhcApQWUsxM2Rsc0pjdnUzMURtcldzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHV0pTdmxiQ3IyUGNvYTgvaHJGeEVROWluZXkKRmw3ekVSMHo0bW1ocnFkbjU4OTM1UTZQOXNVbjlvUm1Wa212NlFvbWpZYnZuMnhIK1BRVEdRdmRoRWJLNVFaUAoyejc0OTEydHpTOUE3S0thcEpaYWJTZWozcm0xQ2xSckg5Tzc5M3JEYmxyZ1QrcVduRVR4RzZEK1k0QnFVODJiClUzL3MrbVVxQ2Q2allzMk12RURZRFpoYm1ORkJYY1BSUm85Tk9qdlZPdVhEU0RBOVFWMlp5VjhPT3FFY0htRzUKQ2d3YTg0OXMxdmdGNXRQR1ZoU1MxVnR2bHlVVjlPUDVzOVhQZnh3Vkx4Zk0rWUh5c093ajJJWXhMdlozZlpWcwpDaFdSb1E1T1JsdDBtN1FPU3F4TDlZZlUybytXSTViZWMwWVFIRDhNeUZEYnJNZG02YnZQdlh4R3lZaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.17.0.21:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJR2hadStiL3l0U293RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TWpBeU1UWXhOekUxTWpoYUZ3MHlNekF5TVRZeE56RTFNekJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW5DWjJhTXdRRWhQV0U5ZVoKUWxkZmFGN2JXam5Ld1YzOXJQU2RjMW51cVhWK01Ld3VNZXFOZnFzM0lVb3M3QU5BQ1E5L1FXUFVFbHdQY1E2SQpLWWRYOGdOMHpXT0lXSEY1N3pIam9ENHJsN3M0OTNUN2NpZFRJTVArRXpYV0l4bnp4WURTZFYrUFRoL3Y1TVIvCngyRFpKT1laOUJlTkY5dVg5ZXQ1TUtSOFdmcjMwM1EwR3Y2MUdiUTlXeHB6bDRUK2ZlbUpZbzRSSDBGNUxVNmEKYi9TRUJKUTNkTDAwYnA0bGx0RFIxK2s5U0dsNGpoVFQvbmZHN0ZUZ0pXR0tNRnNBVnV3YndmTnlLejdiNVZDdgpiMmErU3hpRXJtbUVNdFlxbE5VRm1RTXI0OS9XL1RRSVBOM1ZYdEo5U1J2QW8xYStTdDdoU0FSLzRMNEgxdlYzCno1eUJhUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHVmg5SmlVdkRYODJGWWFHTi9DY3JuTURNUG9rdk5uWTBqYwpCUndTNisyWVZoWll0YWtmOHluQ3dBMDU1dmR3dGFRUEc0SEZEd1lwSVFPMU5ERVJVRDNXRnJON2x5dGN5SFJLCjAvTGJrMjZtUjE5WENTOEZmeHFQYStJTzRnNm8rejl3ajZwWUVjZ29nNDZxejA2WWJZcGxwa01ISWN6M3BiQzcKTGY0UlRFTUd1SFFyNTJ0N2NmY1FsZFZtdFp2Tkh4eWhUdnE0aTlYS0E0ekdmeEliNGNBOGowNHdhU3BkTXpaSQpIZW8zbW1ORno4d3VER08wZlpTUHNqeWE0TTZwNmNsWDRZR2c4Y3RrbnFZZXAreENYd0xKRjV3M1dDUWZNTjZ1CmlTVDZvQVgxakhOWGZJTkgvcXdIOEc3OThXd1lVS2xEV05FenoySTZGZTBnQnBoOUNnbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBbkNaMmFNd1FFaFBXRTllWlFsZGZhRjdiV2puS3dWMzlyUFNkYzFudXFYVitNS3d1Ck1lcU5mcXMzSVVvczdBTkFDUTkvUVdQVUVsd1BjUTZJS1lkWDhnTjB6V09JV0hGNTd6SGpvRDRybDdzNDkzVDcKY2lkVElNUCtFelhXSXhuenhZRFNkVitQVGgvdjVNUi94MkRaSk9ZWjlCZU5GOXVYOWV0NU1LUjhXZnIzMDNRMApHdjYxR2JROVd4cHpsNFQrZmVtSllvNFJIMEY1TFU2YWIvU0VCSlEzZEwwMGJwNGxsdERSMStrOVNHbDRqaFRUCi9uZkc3RlRnSldHS01Gc0FWdXdid2ZOeUt6N2I1VkN2YjJhK1N4aUVybW1FTXRZcWxOVUZtUU1yNDkvVy9UUUkKUE4zVlh0SjlTUnZBbzFhK1N0N2hTQVIvNEw0SDF2VjN6NXlCYVFJREFRQUJBb0lCQUJUT3QwYWRzcUtHRENmVwp1ZkJ5TGg5OVRHT0lnZkVrY2pKQzRseEZwcHprTUMyU05CR292TFFkWXBwSUJSWnFGSFZnb0VkNVl3QVRUbWQxClZVV2JhR2V5ZURVS1IxZnB2eUxWbGd4N1hGbTZ0QXVkaGdnZkgxK2xYa3ozQnBMazJXdi9ENHhlY0gwcm1rNzAKdEVaQUxJOVRMakF0bHpxUG8zODhvb2lOMHBnMnVOU1NNb2FYMlRCNTRWVmZwQktNV1Vnc3lLVjN1bjQ0a1p4TAoxbTBDUmYwbjh0cE9ka3JiOFU3OUd4dmxjeFZ0VGwzeHd5YjBpMjBHZjc5b2dRMlJKMmprUXlFZTNtSFpEZzRLCm5EM1A5TWJPblpHYnIwS25HUEdMQUU2aENlQmhjaFQrVkx6aml1YTRIT3hLT3BlTGJoN2Y2WG9hd2F0UC92RnoKbTFZaHV4RUNnWUVBd3pUdndFNFU4VlF2K3FzUzhoeTZ3T2g2UTV5L09hQXYvUWVQM0RMRDcwa3RheXFPNHEvOQo4S3krY0dUcnVRWGFxZ2RXb3B3QytjcUxHTmpqM1grMFlEK09YUnpEWjNwS0drdGU0dExHWFRMMHQydzd4S21lCnAzWFdPWUc3VGhhMlNnV3ZsNFByQVZYQUdGN1dIdmxCMjFnWksrSmpCUzVQbDRkKzY1TlliQ1VDZ1lFQXpNZXoKOERjeDhVSHNjc2piN3BUSGZYR2NaR1lISUVGanZzaEhVMTFFcVVURTdPRXJLa2RIL3VQMldvQnQyVTNHRWVIKwpjbExYT2hObHJYSk1lTlRSZ3pjdWZXR0FZSUI4c0VZZHRCZkU2cXdiSFNJRGdxdUlmVlBTS0FCZ1ZEQlNqQ21hCllBbEtUc1FRNk9BWWRrSEFoOFhHTWxWTHhZRXFDcHRQMUlDWVd2VUNnWUJCOUYzWkxNaVlyV2cwbXA2aEhTSGcKVWV3VmNoMk9HOWg2OXBZZHdTMUFhUW9tZmsrZXVmWGozb25Oc2R1UFBUSWZackZYNmJ4N3AzcU4zNkpGMFlNaApUZmdNNzgzNTVYQ3FKN01ja2ppaGZyaitURng1V3hJVzVKS0lCWlJDa3h2dU1MT2VUdDBWSzJMbkdRdmhsVGd5Ckl6Y0dBOHlQRUpHbDBnc0pEMVM5M1FLQmdDbnJsOEpOQ2tSQklvNDg0QWp2SEFkcUFvSHJBUHB4YUdIZHpmd2EKMDd0ekFIMGRleUNraHZCKzZpZU1zdkFSSThMMWFJa3V2REIvV0FreHBNWDNpd3c1ZDZMakdzeW1nTi8wK28rcgpiZnc4VHpDZmplaHFCUDdpOGRxT2VTbkMycU1td1J0M3kyWmtiTEcvYmtLSEkwcW9DZkt6MDFjVElmQVo2eld0ClBZTWhBb0dBRUZsYUc0VE9MdGV3YlZoM0NXOTdLdm8wVGxhRDRPaldEbzFldEUxcFJtelRrWEdrZGdTZlNYTWkKeXR2MEkvZGJYcG13NVVoc0xXaXJRSVlVM1VjaEhKeml4VTNXVVZwYis4THpYTCtCREsxWjA5RDl5WVpRZDZUbwp3NE05ZDAvY2txcGhJeVJJbU10WElUSlFSQVhsYkdIbnlod0pCY3VKVmY5eGxwNU5wbEk9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

Docker commands – Getting started

To run one of the official Docker images, you can simply use “docker run”:

$ docker run centos

The first time you run it, the image is not found locally, so Docker downloads it from the registry.
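
The pull output looks roughly like this (illustrative; layer IDs and digest will differ):

Unable to find image 'centos:latest' locally
latest: Pulling from library/centos
<layer-id>: Pull complete
Digest: sha256:<digest>
Status: Downloaded newer image for centos:latest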

Once downloaded, the image is saved in the local Docker cache and can be reused later.

If it's an official image, you only need to enter the image name.

But if it's taken from a user's repository, the format is <username>/<image>.

The 100% free software Linux image Trisquel, for example, is taken from the repository of a user called “kpengboy”.

In this case, even if you are logged in, you can't just run “docker run”. You need to pull the image first:

$ docker pull kpengboy/trisquel

To see the list of images downloaded so far, run:

$ docker images

If you also want to start a bash in the container, use “-it” for the interactive mode:

$ docker run -it kpengboy/trisquel

Then you can work within the container's bash shell.

By default, Docker will download the latest version available, but you can also specify a version (called a “tag”) you want to work with:

$ docker run redis:4.0

If you want to access a web app or, for example, a database, you generally get an assigned URL with a port number. You can change the port mapping by passing it as a parameter.

For example, if you want to run mysql and access it on a different port than 3306, use the “-p” option to give <host-port>:<container-port>:

$ docker run -p 52000:3306 mysql

To persist your data even when the container is killed, you can mount a volume with “-v”, mapping a host directory to the container's data directory:

$ docker run -v /opt/data/mysql:/var/lib/mysql mysql

You can pass an environment variable to your container, which helps you change its behaviour without modifying the image:

$ docker run -e BACKGROUND_COLOR=blue simple-web-app

To see which containers are running:

$ docker ps

To see stopped containers too:

$ docker ps -a

If you want to get the details of a specific container:

$ docker inspect vibrant_chatelet

You can check the log by running:

$ docker logs vibrant_chatelet

You can start a container in background mode with the “-d” option:

$ docker run -d kpengboy/trisquel

To remove a container (add “-f” to force removal if it's still running):

$ docker rm vibrant_chatelet

Or by container ID:

$ docker rm 0d6d64f9053c

To remove an image, you need to make sure that no container is running it first. You might use “docker stop” or just remove the container directly.

Then remove the image:

$ docker rmi kpengboy/trisquel

Kubernetes – Exposing Services and Endpoints

To access pods you have three options:

  • services (load balancing)
  • port forwarding (through localhost, for development purposes)
  • ingress (HTTP routing into the cluster)

In this section we will inspect services and port forwarding.

SERVICES

To allow containers to communicate with each other and the outside world, Kubernetes allows you to expose pods as services.

Services are decoupled from deployments (they exist independently) and the only way to associate them with a deployment is by using labels.

They can be used to access multiple deployments and are automatically “load balanced” by Kubernetes.

There are the following service types (plus one variant):

  • ClusterIP (for internal access)
  • NodePort (to allocate a specific port for external access)
  • LoadBalancer (for public cloud)
  • ExternalName (for DNS-level redirection)
  • services without a selector, for direct IP/port associations (e.g. external databases or other namespaces)

To expose a deployment you can use the imperative way:

> kubectl expose deploy flask --port 5000

A cluster IP service manifest might look like the following:

apiVersion: v1
kind: Service
metadata:
  name: jupiter-crew-svc
  namespace: jupiter
...
spec:
  clusterIP: 10.3.245.70
  ports:
  - name: 8080-80
    port: 8080
    protocol: TCP
    targetPort: 80

  selector:
    id: jupiter-crew
  sessionAffinity: None
  type: ClusterIP

status:
  loadBalancer: {}




NODEPORT

NodePort and LoadBalancer services are used to expose the service outside the cluster.
In addition to creating the ClusterIP, this will allocate a port in the range 30000-32767 on every node of the cluster, routing to the ClusterIP.
For example:

$ kubectl expose deploy flask --port 5000 --type=NodePort

 The manifest looks like:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-10-14T18:19:07Z
  labels:
    run: flask
  name: flask
  namespace: default
  resourceVersion: "19788"
  selfLink: /api/v1/namespaces/default/services/flask
  uid: 2afdd3aa-b10c-11e7-b586-080027768e7d
spec:
  clusterIP: 10.0.0.39
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31501
    port: 5000
    protocol: TCP
    targetPort: 5000  
  selector:
    run: flask
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

If you don't specify the nodePort, a random port from that range will be allocated.

Exposed services automatically register with the internal Kubernetes DNS, which makes it easier to access them by name rather than by IP. You can resolve the name by using nslookup, like:

$ kubectl exec -it busybox2 -- nslookup nginx
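
The lookup returns something like the following (illustrative; it assumes a busybox pod named busybox2 and a service called nginx in the default namespace, and the addresses depend on your cluster):

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.0.0.39 nginx.default.svc.cluster.local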

HEADLESS SERVICE

If you set “clusterIP” to “None” in the spec, you get a headless service.
A headless service groups pods without allocating an IP address or forwarding traffic, which is useful if you want to control exactly which pods you connect and communicate with. You request this setup by explicitly setting clusterIP to None within the service definition:

For example, a headless service might be:

kind: Service
apiVersion: v1
metadata:
  name: flask-service
spec:
  clusterIP: None
  selector:
    app: flask

For this kind of service, DNS entries will be created that point to the Pods backing the service, and they will be automatically updated as Pods matching the selector come online (or disappear).

ENDPOINT

You can expose a remote system as a service internally by creating an endpoint for it.

For example, if you had a remote TCP service running on the internet at port 1976 at the IP address 1.2.3.4, you could define a Service and Endpoint to reference that external-to-kubernetes system:

kind: Service
apiVersion: v1
metadata:
  name: some-remote-service
spec:
  ports:
  - protocol: TCP
    port: 1976
    targetPort: 1976

This would work with the following Endpoints definition:

kind: Endpoints
apiVersion: v1
metadata:
  name: some-remote-service
subsets:
  - addresses:
      - ip: 1.2.3.4
    ports:
      - port: 1976

PORT FORWARDING

You can also ask kubectl to set up a proxy that forwards all traffic from a local port you specify to a port associated with the Pod you choose:

$ kubectl port-forward flask-1599974757-b68pw 5000:5000

Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000

This is forwarding any and all traffic that gets created on your local machine at TCP port 5000 to TCP port 5000 on the Pod flask-1599974757-b68pw.
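
While the port-forward is running, you can reach the application from another terminal on your machine (assuming the flask app answers on the root path):

$ curl http://localhost:5000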

Kubernetes – Network policies

To control access to pods you can use network policies, which work like firewalls.

You can block or allow egress or ingress traffic to Pods.

The association with pods is made by using labels.

EGRESS

If you want to restrict, for example, the outgoing TCP connections from a deployment, except for specific ports (like UDP/TCP port 53 for DNS resolution), then you need to define egress rules.

In the following example we also have another egress rule, which allows outgoing connections to pods having the “api” label, on ports 80 and 443:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np1
  namespace: venus
spec:
  podSelector:
    matchLabels:
      id: frontend          # label of the pods this policy should be applied on
  policyTypes:
  - Egress                  # we only want to control egress
  egress:
  - to:                     # 1st egress rule
    - podSelector:            # allow egress only to pods with api label
        matchLabels:
          id: api
    ports:
    - port: 443
    - port: 80

  - ports:                  # 2nd egress rule
    - port: 53                # allow DNS UDP
      protocol: UDP
    - port: 53                # allow DNS TCP
      protocol: TCP

After creating the network policy, you can see that the pod selector matches the “frontend” pods.

From the frontend pod you can then verify which outgoing connections are still allowed.

INGRESS

An Ingress is like a virtual host and multiplexes access to several microservices behind a single load balancer. It can do load balancing itself and only works with NodePort services.

There are several ingress controllers, like nginx, Kong or Contour. You need to have one of them installed.

Let's say we want to allow incoming traffic only from pods labelled as FTP servers (on port 21), from a namespace having the label “team=A”, and from a specific CIDR IP block on port 443:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: untitled-policy
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: ftp
      ports:
        - port: 21
    - from:
        - namespaceSelector:
            matchLabels:
              team: A
    - from:
        - ipBlock:
            cidr: 10.2.1.3/32
      ports:
        - port: 443
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - port: 443
        - port: 80

As you can see, we can put several ingress and egress rules in the same NetworkPolicy manifest, as long as both are listed under “policyTypes” in the spec.

You can create Ingresses with imperative commands too, like:

$ kubectl create ingress my-webapp-ingress --rule="foo.bar/foo=service1:8080"

The following Ingress rules are possible:

  • optional host (otherwise all HTTP traffic)
  • list of paths (exposed as POSIX regular expressions)
  • backend (serviceName and servicePort)

You can define a default backend for incoming traffic that does not match any specific path, and you can refine your configuration with different backend types:

  • simple fanout (for multiple backends, to minimize the number of load balancers)
  • name based virtual hosting (to match to a specific service)
  • TLS ingress (to use a TLS secret)
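
As a sketch, the declarative equivalent of the imperative command shown above might look like the following manifest (the host, path, service name and port are just the example values from that command):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-webapp-ingress
spec:
  rules:
  - host: foo.bar             # optional host; without it the rule matches all HTTP traffic
    http:
      paths:
      - path: /foo            # path to match
        pathType: Prefix
        backend:
          service:
            name: service1    # backend service name
            port:
              number: 8080    # backend service port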

Kubernetes – Storage options – ConfigMap and Secrets

As pods might need the same configuration variables or credentials, you can store them in a single location by using a ConfigMap or a Secret.

This mechanism is called “configuration injection”.

You can create a config map the imperative way and specify the source, which can be:

  • from literal
  • from file
  • from env-file

For example:

$ kubectl create configmap cm-db --from-literal=DB_NAME=mydb --from-literal=DBHOST=mydbsite.net

You can generate it from a file, like:

$ kubectl create configmap game-config --from-file=/opt/config-maps/my-map.txt

The file “my-map.txt” might look like:

# Environment files use a new line for each variable.
KIAMOL_CHAPTER=ch04
KIAMOL_SECTION=ch04-4.1
KIAMOL_EXERCISE=try it now
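
Note that --from-file stores the whole file under a single key. If instead you want each line of an environment file to become its own key, there is also the --from-env-file flag; a sketch using the same (assumed) file path and a hypothetical map name:

$ kubectl create configmap game-config-env --from-env-file=/opt/config-maps/my-map.txt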

You can also define a map directly in a YAML file, the declarative way. For example, to embed the file “nginx.conf”:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  nginx.conf: |
    server {
      location / {
        root /data/www;
      }
      location /images/ {
        root /data;
      }
    }

Or simply key-value entries like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: random-generator-config
data:
  PATTERN: "logging"
  EXTRA_OPTIONS: "high-secure,native"
  SEED: "432576345"


You can use the alias “cm” as well. For example, to get the details:

$ kubectl get cm my-cf-map
 

The config map needs to exist before you deploy the pod.

You can add an environment variable from a config map for a specific container, like:

apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator
    env:
    - name: PATTERN
      valueFrom:
        configMapKeyRef:                   
          name: random-generator-config
          key: PATTERN

Or you can add all the variables at once, with a prefix, by using “envFrom”:

apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator
    envFrom:
    - configMapRef:
        name: random-generator-config
      prefix: CONFIG_

You can also mount a config map as a volume:

apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator
    volumeMounts:
    - name: config-volume
      mountPath: /config
  volumes:
  - name: config-volume
    configMap:                            
      name: random-generator-config

This way, the volume will map each config map entry to a file.
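
For example, with the random-generator-config map from above mounted at /config, you would expect one file per key (illustrative):

$ kubectl exec random-generator -- ls /config
EXTRA_OPTIONS
PATTERN
SEED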

SECRETS
Secrets have a similar API to config maps, but Kubernetes manages them in a different way: on the nodes they are held in memory (tmpfs) rather than written to disk, and the cluster can additionally be configured to encrypt them in transit and at rest.

There are three types of secret:

  • docker-registry
  • TLS
  • generic

Assuming you want to put your username and password in two different text files, called “username.txt” and “password.txt” respectively, you can create a secret like the following:

$ kubectl create secret generic database-creds --from-file=username.txt --from-file=password.txt

$ kubectl create secret generic passwords --from-literal=password=foobar

Notice that you need to include “generic” in the command.

You create tls and docker-registry entries in the same way:

$ kubectl create secret tls demo-tls --key "auth.key" --cert "auth.cer" -n foo

$ kubectl create secret docker-registry gcr-pull-key --docker-server=gcr.io --docker-username=_json_key --docker-password="$(cat gke_key.json)" --docker-email=xyz@slb.com

In the manifest file, you can reference secrets:


apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator
    env:
    - name: LOG_FILE
      value: /tmp/random.log                   
    - name: PATTERN
      valueFrom:
        configMapKeyRef:                       
          name: random-generator-config        
          key: PATTERN
    - name: SEED
      valueFrom:
        secretKeyRef:                          
          name: random-generator-secret
          key: seed

Just like for the config map, you can import all the keys as env variables at once, like:

  containers:
  - name: secret-handler
...
    envFrom:
    - secretRef:     
        name: secret1

Besides using the “secretKeyRef” parameter, you can reference a file in a volume and then specify the secret name in the “volumes” section.

For example:

spec:
  containers:
    - name: db
      image: postgres:11.6-alpine
      env:
      - name: POSTGRES_PASSWORD_FILE       # Sets the path to the file
        value: /secrets/postgres_password
      volumeMounts:                        # Mounts a Secret volume
        - name: secret                     # Names the volume
          mountPath: "/secrets"            
  volumes:
    - name: secret
      secret:                             # Volume loaded from a Secret 
        secretName: todo-db-secret-test   # Secret name
        defaultMode: 0400                 # Permissions to set for files
        items:                            # Optionally names the data items 
        - key: POSTGRES_PASSWORD  
          path: postgres_password


Secret values are only base64-encoded, so you can basically read them in clear text by running:

$ echo <secret-string> | base64 -d
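
To get the encoded string in the first place, you can read it straight from the Secret object. For example, for the “passwords” secret created above (a sketch; the jsonpath key must match the literal you used):

$ kubectl get secret passwords -o jsonpath='{.data.password}' | base64 -d
foobar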

This means that it's better to use a dedicated tool if you want to achieve more security!

As secrets can be accessed by whoever has access to the cluster and decoded (i.e. read in clear text!), they cannot be considered a good production-grade solution on their own. In other words, you should rather look at a dedicated tool like HashiCorp Vault.

Kubernetes – Storage options – Volumes and claims

In Kubernetes, Pods use ephemeral local storage by default. But if you need your data to outlive the pod or to be shared across workloads, you can define a volume in the pod specification that can be shared among containers.

If you need to retain the data in case the pods are killed, you need a persistent volume, which refers to external storage.

Several types of volumes are supported by Kubernetes:

  • emptyDir
  • hostPath
  • azureDisk (for Microsoft Azure)
  • awsElasticBlockStore (for Amazon Web Services)
  • gcePersistentDisk (for Google Cloud)
  • etc.

Emptydir

The type “emptyDir” can be used in a pod specification to create an empty directory and mount it into one or more containers. This gives you a quick way to create an ephemeral file system to use in your pod.

You specify the volume under “volumes” in the spec section and then add a “volumeMounts” entry in each container section:

spec:
  containers:
  - image: g1g1/hue-global-listener:1.0
    name: hue-global-listener
    volumeMounts:
    - mountPath: /notifications
      name: shared-volume
  - image: g1g1/hue-job-scheduler:1.0
    name: hue-job-scheduler
    volumeMounts:
    - mountPath: /incoming
      name: shared-volume
  volumes:
  - name: shared-volume
    emptyDir:
      medium: Memory

HostPath

Sometimes you want your pods to access some host information (for example, the Docker daemon), or you want pods on the same node to communicate with each other. That's when you might need HostPath volumes.

HostPath volumes persist on the original node and are intended for intra-node communication.

If a pod is restarted on a different node, it can’t access the HostPath volume from its previous node.

The containers that access host directories must have a security context with privileged set to true or, on the host side, you need to change the permissions to allow writing.

Here is an example of a HostPath volume used by a container:

apiVersion: v1
kind: Pod
metadata:
  name: hue-coupon-hunter
spec:
  containers:
  - image: busybox    
    name: hue-coupon-hunter
    volumeMounts:
    - mountPath: /coupons
      name: coupons-volume
    securityContext:
      privileged: true
  volumes:
  - name: coupons-volume
    hostPath:
      path: /etc/hue/data/coupons

The “securityContext” section is mandatory to make it work!

Local volumes

Local volumes are similar to HostPath, but they persist across pod restarts and node restarts.

The purpose of local volumes is to support StatefulSets where specific pods need to be scheduled on nodes that contain specific storage volumes.

We need to define a storage class for using local volumes. For example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

$ kubectl create -f local-storage-class.yaml
storageclass.storage.k8s.io/local-storage created

Now, we can create a persistent volume using the storage class that will persist even after the pod that’s using it is terminated:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  labels:
    release: stable
    capacity: 1Gi
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/disk-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube

As we are defining it as a local volume, we can't omit the nodeAffinity.

To assign a volume to a container, you need to define a persistent volume claim that claims the same storage size.

Persistent volume claims represent a request for storage, specifying the size and the type of volume you need.

For example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-pvc
spec:
  storageClassName: local-storage
  volumeName: local-pv
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Once the PVC is bound to a volume, the volume is available to the pods.

Run “kubectl get pv,pvc” and make sure that the PVC is bound.
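
The output looks roughly like this (illustrative values, columns abbreviated):

$ kubectl get pv,pvc
NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS
persistentvolume/local-pv   1Gi        RWO            Delete           Bound    default/local-pvc   local-storage

NAME                              STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS
persistentvolumeclaim/local-pvc   Bound    local-pv   1Gi        RWO            local-storage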

Kubernetes looks for a pv that matches the volume claim requirements.

Once you have a persistent volume claim, you can add a dependency on it in your Pod:

apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator
    volumeMounts:
    - mountPath: "/logs"
      name: log-volume
  volumes:
  - name: log-volume
    persistentVolumeClaim:                 
      claimName: local-pvc

If storageClassName is not specified in the PVC, the default storage class will be used for provisioning.
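
You can see which class is the default with “kubectl get storageclass”; the default one is marked with “(default)” next to its name. Illustrative output (e.g. on minikube, columns abbreviated):

$ kubectl get storageclass
NAME                 PROVISIONER                    RECLAIMPOLICY
standard (default)   k8s.io/minikube-hostpath       Delete
local-storage        kubernetes.io/no-provisioner   Delete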

For more info, check:

https://kubernetes.io/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes/

Kubernetes – Using Helm

The helm repo command helps you manage the repositories you have access to. After installation, you can see that you do not have access to any repository:

$ helm repo list

Error: no repositories to show

You can add a new repository with the command helm repo add, for example:

$ helm repo add bitnami https://charts.bitnami.com/bitnami

"bitnami" has been added to your repositories

If you check the list again, you can see the repo:

$ helm repo list

NAME     URL

Bitnami  https://charts.bitnami.com/bitnami

Now, you can install a chart from this repository:

$ helm install my-wp-install bitnami/wordpress[...]

Later, when a new version is released, you can upgrade your installation with the helm upgrade command:

$ helm upgrade my-wp-install bitnami/wordpress

Release "my-wp-install" has been upgraded.

Happy Helming![...]

You can manage the history of the deployed revisions with helm history:

$ helm history my-wp-install
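
The history is shown as a table, roughly like the following (illustrative values):

REVISION   UPDATED                    STATUS       CHART              APP VERSION   DESCRIPTION
1          Mon Oct  3 10:15:13 2022   superseded   wordpress-15.2.5   6.0.2         Install complete
2          Mon Oct  3 10:21:02 2022   deployed     wordpress-15.2.7   6.0.2         Upgrade complete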

If necessary, you can roll back to a previous release:

$ helm rollback my-wp-install

You can uninstall the package with the helm uninstall command:

$ helm uninstall my-wp-install

GETTING READY FOR THE CKAD EXAM

If you take the CKAD exam, you might be asked to update an existing repo. It means that, before upgrading your installation, you should run:

$ helm repo update

Helm can be used as an alternative to kubectl create. You can specify options for your deployments, like the replicaCount:

$ helm -n mercury install internal-issue-report-apache bitnami/apache --set replicaCount=2 --set image.debug=true

Kubernetes – Health checks with liveness and readiness probes

Container probes allow you to test the health of your pods through a generic mechanism that you can configure and execute according to your needs.

There are three types of probes:

  • exec – a command executed inside the container, expected to return 0 as exit code
  • httpGet – a request expected to return 200
  • tcpSocket – for successful connectivity

And two categories:

  • liveness probe  – to check the health of the running application
  • readiness probe – to check if the application is ready to start working

Liveness probe

If you define a liveness probe, you basically add a check that verifies the pod is working as expected. If it is not, the container is restarted according to the defined restartPolicy.

The three types of liveness probe are:

  • ExecAction: runs a command within the Pod; a result of anything other than 0 represents a failure.
  • TCPSocketAction: it consists of trying to open a socket, without doing anything else. If the socket opens, the probe is successful, and if it fails (after a timeout), the probe fails.
  • HTTPGetAction: similar to the socket option, this makes an HTTP connection to your Pod at a specified URI, and the response code of the HTTP request is used to determine the success/failure of the liveness probe.

The restartPolicy can have the following values:

  • Always (default)
  • OnFailure
  • Never

The restartPolicy can be found under spec.template.spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dep
spec:
  template:
    metadata:
      name: my-dep-template
    spec:
      containers:
      - name: my-container
        image: busybox
        command: ["echo", "this is my container!"]
      restartPolicy: OnFailure

Kubernetes keeps track about how often a restart occurs and will slow down the frequency of restarts if they are happening in quick succession, capped at a maximum of five minutes between restart attempts.

The number of restarts can be inspected in the “restartCount” field in the output of kubectl describe, and also in the status section of the YAML.

If you want to inspect the fields you can configure under “livenessProbe”, you can run:

$ k explain deployment.spec.template.spec.containers.livenessProbe

If you are using an HTTP-based probe, you have a number of additional variables that can be defined while making the HTTP request:

  • host: Defaults to the Pod IP address.
  • scheme: HTTP or HTTPS (defaults to HTTP).
  • path: Path of the URI request.
  • httpHeaders: Any custom headers to be included in the request.
  • port: Port on which to make the HTTP request.

So an httpGet liveness probe might look like:

livenessProbe:
        httpGet:
          path: /checkalive
          port: 8888
          httpHeaders:
          - name: X-Custom-Header
            value: ItsAlive
        initialDelaySeconds: 30
        timeoutSeconds: 10

Readiness probe

Readiness probes are generally necessary if your container depends on other things that might be unavailable at the beginning (like a database service not started yet).

When a readiness probe fails for a container, the container’s pod will be removed from any service endpoint it is registered with.

Here is a sample readiness probe with the “exec” command:

readinessProbe:
  exec:
    command:
        - /usr/local/bin/checkdb
        - --full-check
        - --data-service=my-data-service
  initialDelaySeconds: 60
  timeoutSeconds: 5

PUTTING THE PROBES IN THE MANIFEST

It is fine to have both a readiness probe and a liveness probe on the same container as they serve different purposes.

You can add them under spec.containers: 

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: flask
  labels:
    run: flask
spec:
  template:
    metadata:
      labels:
        app: flask
    spec:
      containers:
      - name: flask
        image: quay.io/kubernetes-for-developers/flask:0.3.0
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        envFrom:
        - configMapRef:
            name: flask-config
        volumeMounts:
        - name: config
          mountPath: /etc/flask-config
          readOnly: true
        livenessProbe:
          httpGet:
            path: /alive
            port: 5000
          initialDelaySeconds: 1
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 5000
          initialDelaySeconds: 5
          periodSeconds: 5
      volumes:
      - name: config
        configMap:
          name: flask-config

Kubernetes – Set, Rollout, apply… update!

The imperative approach allows you to update things faster.

In particular, the following commands:

  • apply
  • scale
  • edit
  • set
  • rollout

APPLY

If you edit the YAML configuration file, you can update your resources by running:

$ kubectl apply -f yourfile.yaml

SCALE

To change the amount of replicas:

$ kubectl scale --replicas 3 deployment webapp

Kubernetes will rearrange the pods immediately: by running this command, you will see pods being terminated and recreated right away.


EDIT

$ kubectl edit deployment/nginx-deployment

SET

The set command is used for frequent little updates.

For example, to set a new image version for the pods:

$ kubectl set image deployment <deployment-name> <container-name>=nginx:1.12 --record

If you omit “--record”, the change-cause description will not be filled.

To set resources:

$ kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi

This will apply to all the containers.

You can set environment variables in several ways:
$ kubectl set env deployment nginx --env VAR1=value1
$ kubectl set env deployment nginx --from=configmap/vars --keys="var1,var2"
$ kubectl set env deployment nginx --from=secret/passwords --keys="pass1"

Afterwards, check that the pods have the environment variables:
$ kubectl exec -it <podname> -- bash -c "env | grep <varname>"

All of this happens on the live resource: after running such a command, the change is rolled out immediately.

ROLLOUT

The kubectl rollout command applies to a deployment. For example, to check the status:

$ kubectl rollout status deployment/test-deploy

To get the history of the deployment:

$ kubectl rollout history deployment flask

deployments "flask"
REVISION  CHANGE-CAUSE
1         initial deployment
2         kubectl set init container log-sidecar
3         deploying image 0.1.1
4         deploying image 0.2.0

If you annotate your deployment, you will see the information in the CHANGE-CAUSE column.
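
One way to set the change cause explicitly is the kubernetes.io/change-cause annotation (a sketch, reusing the flask deployment and the description from the history above):

$ kubectl annotate deployment flask kubernetes.io/change-cause="deploying image 0.2.0"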

To rollback the deployment to the previous version:

$ kubectl rollout undo deployment/flask

To roll it back to a specific revision:

$ kubectl rollout undo deployment/flask --to-revision=2

Sometimes you don't want to wait for the deployment to be recreated and you want to trigger the rollout yourself. You can do it like this:

$ kubectl rollout restart deploy my-deploy
