Default behavior

imagePullPolicy is generated automatically:

Image specified by digest (hash) => imagePullPolicy: IfNotPresent
Image tag is latest => imagePullPolicy: Always
No image tag specified => imagePullPolicy: Always
Image tag is anything other than latest => imagePullPolicy: IfNotPresent
If the image tag is updated later, imagePullPolicy is not re-generated
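The defaulting rules above can be sketched in a Pod spec; an explicitly set imagePullPolicy always overrides the generated default (the pod and container names here are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pull-policy-demo     # hypothetical name
spec:
  containers:
  - name: by-tag
    image: nginx:1.27        # non-latest tag -> defaults to IfNotPresent
  - name: by-latest
    image: nginx:latest      # latest tag -> defaults to Always
  - name: explicit
    image: nginx:1.27
    imagePullPolicy: Always  # an explicit value wins over the generated default
```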

Image pulling

Serial by default
serializeImagePulls: false switches pulls to parallel
… no upper bound on the number by default
In v1.32 you can cap it with maxParallelImagePulls
Images within the same Pod are never pulled in parallel
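A minimal KubeletConfiguration sketch combining the two fields above (the value 5 is an arbitrary example; maxParallelImagePulls only takes effect when serializeImagePulls is false):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: false  # allow parallel image pulls
maxParallelImagePulls: 5    # cap on concurrent pulls across Pods
```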

Restart policy

restartPolicy
Always (default)
OnFailure
Never
Applies to init containers / sidecar containers / application containers
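A sketch of a Pod-level restartPolicy; OnFailure restarts containers only when they exit non-zero, which suits job-like workloads (names and command are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo        # hypothetical name
spec:
  restartPolicy: OnFailure  # restart only on non-zero exit
  containers:
  - name: job-like
    image: busybox:1.36
    command: ["sh", "-c", "exit 1"]  # fails once, kubelet restarts it
```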
Used to inspect the API server startup arguments:
ps -ef | grep kube-apiserver | grep admission-plugins
or, normally, look at /etc/kubernetes/manifests/kube-apiserver.yaml
pass node info to container
spec:
  containers:
  - name: mc-pod-1
    image: nginx:1-alpine
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
simple shell loop (append a timestamp every second):
while true; do date >> example.txt; sleep 1; done
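A bounded variant of the same loop, writing to a throwaway file so it terminates after three iterations instead of running forever:

```shell
# append a timestamp once per second, 3 times, then stop
out=$(mktemp)
for i in 1 2 3; do date >> "$out"; sleep 1; done
wc -l < "$out"   # -> 3
```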
The controller-manager is responsible for scaling up the Pods of a ReplicaSet. If you inspect the control-plane components in the kube-system namespace, you will see that the controller-manager is not running.
… I forget what those correspond to; there is just a lot to check, which is rather tedious
kubectl cluster-info --kubeconfig=/root/CKA/super.kubeconfig
use --kubeconfig to check a specific config file
K8s resource allocation
When available CPU or memory resources on the nodes reach their limit, Kubernetes will look for Pods that are using more resources than they requested. These will be the first candidates for termination. If some Pods' containers have no resource requests/limits set, then by default those are considered to use more than requested. Kubernetes assigns Quality of Service classes to Pods based on the defined requests and limits.
When resources run short, Pods with no request/limit set are killed first
OK, remember to set requests/limits
A good practice is to always set resource requests and limits. If you don't know the values your containers should have, you can find this out using metric tools like Prometheus. You can also use kubectl top pod, or even kubectl exec into the container and use top and similar tools.
kubectl top node
kubectl top pod
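A sketch of requests/limits on a container; when every container's requests equal its limits for both CPU and memory, the Pod gets the Guaranteed QoS class, otherwise Burstable (names and values here are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1-alpine
    resources:
      requests:
        cpu: "250m"
        memory: "64Mi"
      limits:
        cpu: "250m"       # requests == limits -> Guaranteed QoS
        memory: "64Mi"
```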
Node hasn't joined the cluster yet, needs an upgrade first
apt update
apt show kubectl -a | grep 1.34
apt install kubectl=1.34.1-1.1 kubelet=1.34.1-1.1
Restart kubelet
service kubelet restart
Join the cluster
# on the control plane
kubeadm token create --print-join-command
Calling the API from a container inside the cluster
"kubernetes.default" resolves to the cluster API server host
# the service account token is mounted inside the container
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"
but I did it with
kubectl get --raw "/api/v1/namespaces/{NAMESPACE}/secrets"