ETCD
- A distributed key-value store
- A service that listens on port 2379 by default
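A minimal sketch of the key-value model using the etcdctl client (the key and value here are arbitrary examples, and the v3 API is assumed):

```sh
etcdctl put greeting "hello"   # store a value under a key
etcdctl get greeting           # read it back
```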
ETCD in Kubernetes
- Kubernetes stores all cluster state in the ETCD server
- Updating the ETCD server is the last action when you make any change; a change is only considered complete once it is written to ETCD
 
- Deployed from scratch: ETCD runs as an independent service on the host
 
- Deployed from the kubeadm tool: ETCD runs as a pod called etcd-master under the kube-system namespace

```sh
# Get the pod's shell
kubectl exec -it etcd-master -n kube-system -- sh
# Get all keys
etcdctl get / --prefix --keys-only
```
 
- ETCD in an HA environment: list every instance as controller-{i}=https://{CONTROLLER_IP}:2380 in the initial-cluster parameter
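A sketch of the resulting flag for a two-controller cluster (the IPs are placeholders; 2380 is ETCD's peer-to-peer port):

```sh
etcd --name controller-0 \
  --initial-cluster controller-0=https://10.0.0.10:2380,controller-1=https://10.0.0.11:2380
```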
Kube-Apiserver
- Authenticates and validates requests, and retrieves/updates data in ETCD (it is the only component that talks to ETCD directly)
- The other components, such as the scheduler, kube-controller-manager, and kubelet, use kube-apiserver to perform updates in the cluster in their respective areas
- Configurations
  - Kubeadm pod: kube-apiserver-master
  - Kubeadm config: /etc/kubernetes/manifests/kube-apiserver.yaml
  - Check config: ps aux | grep kube-apiserver
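Two quick ways to see the apiserver at work on a kubeadm cluster (both only read state):

```sh
# Confirm how the apiserver reaches ETCD
grep etcd-servers /etc/kubernetes/manifests/kube-apiserver.yaml
# Trace kubectl's underlying HTTPS calls to the apiserver
kubectl get nodes -v=7
```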
 
Kube-Controller-Manager
- Controller-Manager: a single service that contains all of the necessary controllers
 
- Node-Controller: continually checks the status of nodes; when a node becomes unreachable, the node-controller eventually recreates its pods on healthy nodes
 
- Replication-Controller: continually monitors replica sets and ensures that the desired number of pods is available
 
- Configurations
  - Kubeadm pod: kube-controller-manager-master
  - Kubeadm config: /etc/kubernetes/manifests/kube-controller-manager.yaml
  - Scratch config: /etc/systemd/system/kube-controller-manager.service
  - Check config: ps aux | grep kube-controller-manager
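The node-controller's check intervals can be tuned through kube-controller-manager flags in the kubeadm manifest; a sketch using the upstream flag names and their defaults:

```yaml
# Excerpt from /etc/kubernetes/manifests/kube-controller-manager.yaml (sketch)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --node-monitor-period=5s          # how often node status is checked
    - --node-monitor-grace-period=40s   # unreachable after this long without a heartbeat
```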
 
Kube-Scheduler
- Assigns a pod to the best node that satisfies its requirements (CPU, memory, etc.)
- Scheduling process
  - Filter nodes: remove nodes that cannot run the pod
  - Rank nodes: score the remaining nodes and pick the best one
 
- Configurations
  - Kubeadm pod: kube-scheduler
  - Kubeadm config: /etc/kubernetes/manifests/kube-scheduler.yaml
  - Check config: ps aux | grep kube-scheduler
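A minimal sketch of what the scheduler filters and ranks against; the pod name, image, and quantities are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "1"        # nodes without 1 spare CPU are filtered out
        memory: 512Mi   # nodes without 512Mi spare memory are filtered out
```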
 
Kubelet
- At worker nodes, the kubelet:
  - Registers the node with the cluster
  - Creates pods
  - Monitors the node & its pods
 
- Configurations
  - You must manually install it onto worker nodes (kubeadm does not deploy it as a pod)
  - Check config: ps aux | grep kubelet
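Because the kubelet is installed directly on the host rather than deployed as a pod, it can be inspected as a regular service (assuming a systemd-based distro):

```sh
systemctl status kubelet   # service state
journalctl -u kubelet      # kubelet logs
```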
 
Kube-Proxy
- A process that runs on each node
- When a new service is created, kube-proxy creates the appropriate rules (e.g., iptables rules) on each node to forward traffic destined for that service to the backend pods
- Configurations
  - Kubeadm pods: kube-proxy-xxxxx (deployed as a DaemonSet, so one pod runs per node)
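Because it is a DaemonSet, one kube-proxy pod should appear per node; a quick check (k8s-app=kube-proxy is the label kubeadm applies by default):

```sh
kubectl get daemonset kube-proxy -n kube-system
kubectl get pods -n kube-system -l k8s-app=kube-proxy
```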
 
Recap
Pods (po)
- Generate YAML from a command, or write a minimal manifest starting with apiVersion: v1 (sketches below)
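Sketches of both snippets (the pod name nginx and its image are placeholders, not from the original notes):

```sh
# Generate yaml from command
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
```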
ReplicaSets (rs)
- apiVersion: apps/v1 (a sketch of a full manifest is below)
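A minimal ReplicaSet sketch (names, labels, and replica count are placeholders):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-rs
spec:
  replicas: 3
  selector:
    matchLabels:   # must match the pod template's labels
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx
```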
Deployments (deploy)
- Generate YAML from a command; the manifest is similar to a ReplicaSet's, with kind: Deployment
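A sketch of the generator command (deployment name and image are placeholders):

```sh
# Generate yaml from command
kubectl create deployment my-deploy --image=nginx --dry-run=client -o yaml > deploy.yaml
```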
Services (svc)
- NodePort, ClusterIP, LoadBalancer
- A Service is cluster-wide (cross-node): a NodePort opens the same port on every node, and traffic is forwarded to matching pods on any node

```yaml
# NodePort
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30080
  selector:
    app: my-app
```
- ClusterIP is the default type (a sketch is below)
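A minimal ClusterIP sketch (service name, port, and selector are placeholders):

```yaml
# ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: back-end
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: back-end
```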
Namespaces (ns / -n)
- A Namespace manifest starts with apiVersion: v1; sketches of the manifest and of using a namespace are below
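Sketches of both snippets (the namespace name dev is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```

```sh
# Use namespace
kubectl get pods -n dev
kubectl config set-context --current --namespace=dev   # make it the default
```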
- Access a service in a different namespace: {service}.{namespace}.svc.cluster.local
ResourceQuotas (quota)
- Set a quota for a namespace (note: spec.hard takes flat keys such as requests.cpu, not nested maps)

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: 10
    requests.cpu: 4
    requests.memory: 2Gi
    limits.cpu: 8
    limits.memory: 4Gi
```
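Once applied, consumption against the quota can be checked with (quota is the short name for resourcequota):

```sh
kubectl describe quota dev-quota -n dev
```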
