Removed slave01 from cluster.

This commit is contained in:
savagebidoof 2023-07-30 17:57:12 +02:00
parent eb11ee49d5
commit c68c2c27f1
4 changed files with 165 additions and 14 deletions


@@ -16,4 +16,4 @@ spec:
- destination:
host: gitea.external.svc.cluster.local
port:
number: 443
number: 443


@@ -3,6 +3,8 @@ kind: DestinationRule
metadata:
name: jelly
namespace: external
labels:
app: jellyfin
spec:
host: jelly.external.svc.cluster.local
trafficPolicy:
@@ -10,4 +12,5 @@ spec:
mode: SIMPLE
connectionPool:
http:
h2UpgradePolicy: UPGRADE
h2UpgradePolicy: DO_NOT_UPGRADE
# h2UpgradePolicy: UPGRADE


@@ -3,6 +3,8 @@ kind: ServiceEntry
metadata:
name: jelly-se
namespace: external
labels:
app: jellyfin
spec:
hosts:
- jelly.external.svc.cluster.local


@@ -931,6 +931,9 @@ istioctl install -y -f P2_Istio_Config/IstioOperator_IstioEgress.yaml
<pre><span style="color:#7F3FBF"></span> Egress gateways installed
<span style="color:#7F3FBF"></span> Installation complete </pre>
> **Note**:\
> The egress gateway doesn't work right off the bat; it requires some extra configuration, which I don't think will be covered in this walkthrough.
#### IstioOperator_LocalIngress.yaml
```shell
@@ -1003,7 +1006,7 @@ virtualservice.networking.istio.io/tube-vs created
As per the [cert-manager installation guide through `kubectl`](https://cert-manager.io/docs/installation/kubectl/), I ran the following command to install the CRDs:
```shell
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml```
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
```
> **Note**:\
@@ -1283,8 +1286,161 @@ x-envoy-upstream-service-time: 2
> **Note:**\
> It's quite possible that I also set up the HTTP to HTTPS redirect locally; I still need to decide whether there is any reason to keep local HTTP traffic.
### Part 5
I decided to use an SSD with the Pi4 through a USB 3.0 connector.
On the Orange Pi5 I intend to use an M.2 drive that I bought for it, but I still need to check whether it works correctly.
Therefore, I will remove one node from the current Kubernetes cluster and test the M.2 drive there.
#### Remove 1 Orange Pi Node
https://stackoverflow.com/questions/35757620/how-to-gracefully-remove-a-node-from-kubernetes
```shell
kubectl get nodes
```
```text
NAME STATUS ROLES AGE VERSION
masterk.filter.home Ready control-plane 114d v1.26.2
slave01.filter.home Ready <none> 114d v1.26.2
```
```shell
kubectl drain slave01.filter.home
```
```text
node/slave01.filter.home cordoned
error: unable to drain node "slave01.filter.home" due to error:[cannot delete Pods with local storage (use --delete-emptydir-data to override): default/duckdns-dpl-856cc44f65-79mtn, default/helloworld-6798765f88-ql26n, default/helloworld-nginx-6bb45bc45c-5z7r4, default/helloworld2-dc9cb5db6-m47x7, istio-system/egress-gw-c857495b4-zzlkp, istio-system/istio-local-ingress-778f675879-w9mdh, istio-system/istio-public-ingress-5bb994c8b7-j9hgr, cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-sv6sp, kube-system/kube-proxy-jpnkt, metallb-system/speaker-9p4fz], continuing command...
There are pending nodes to be drained:
slave01.filter.home
cannot delete Pods with local storage (use --delete-emptydir-data to override): default/duckdns-dpl-856cc44f65-79mtn, default/helloworld-6798765f88-ql26n, default/helloworld-nginx-6bb45bc45c-5z7r4, default/helloworld2-dc9cb5db6-m47x7, istio-system/egress-gw-c857495b4-zzlkp, istio-system/istio-local-ingress-778f675879-w9mdh, istio-system/istio-public-ingress-5bb994c8b7-j9hgr
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-sv6sp, kube-system/kube-proxy-jpnkt, metallb-system/speaker-9p4fz
```
```shell
kubectl drain slave01.filter.home --delete-emptydir-data --ignore-daemonsets
```
```text
node/slave01.filter.home already cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-sv6sp, kube-system/kube-proxy-jpnkt, metallb-system/speaker-9p4fz
evicting pod metallb-system/controller-bdf98b979-zqh7n
evicting pod default/helloworld-6798765f88-ql26n
evicting pod default/helloworld-nginx-6bb45bc45c-5z7r4
evicting pod default/helloworld2-dc9cb5db6-m47x7
evicting pod istio-system/istio-local-ingress-778f675879-w9mdh
evicting pod istio-system/egress-gw-c857495b4-zzlkp
evicting pod cert-manager/cert-manager-5f68c9c6dd-q7mfk
evicting pod cert-manager/cert-manager-webhook-5b7ffbdc98-g5z98
evicting pod cert-manager/cert-manager-cainjector-57d6fc9f7d-9c2mp
evicting pod default/duckdns-dpl-856cc44f65-79mtn
evicting pod istio-system/istio-public-ingress-5bb994c8b7-j9hgr
error when evicting pods/"istio-local-ingress-778f675879-w9mdh" -n "istio-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
error when evicting pods/"egress-gw-c857495b4-zzlkp" -n "istio-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
error when evicting pods/"istio-public-ingress-5bb994c8b7-j9hgr" -n "istio-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
```
Well, it's stuck retrying, so let's open a new shell.
#### Cannot evict pod as it would violate the pod's disruption budget.
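The evictions are being blocked by PodDisruptionBudgets; Istio creates one per gateway deployment, and with a single replica no voluntary eviction is allowed. They can be inspected before taking action (a quick check; the exact budget names are whatever Istio generated):

```shell
# List every PodDisruptionBudget in the cluster; the istio-system entries
# are the ones blocking the drain (ALLOWED DISRUPTIONS will likely show 0
# for single-replica gateways).
kubectl get pdb -A
```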
Let's delete the "remaining" pods manually.
```shell
kubectl get pods -A -owide | grep slave
```
```text
istio-system egress-gw-c857495b4-zzlkp 1/1 Running 0 25h 172.17.247.6 slave01.filter.home <none> <none>
istio-system istio-local-ingress-778f675879-w9mdh 1/1 Running 0 25h 172.17.247.10 slave01.filter.home <none> <none>
istio-system istio-public-ingress-5bb994c8b7-j9hgr 1/1 Running 0 25h 172.17.247.19 slave01.filter.home <none> <none>
kube-system calico-node-sv6sp 1/1 Running 6 (2d17h ago) 114d 192.168.1.11 slave01.filter.home <none> <none>
kube-system kube-proxy-jpnkt 1/1 Running 6 (2d17h ago) 114d 192.168.1.11 slave01.filter.home <none> <none>
metallb-system speaker-9p4fz 1/1 Running 9 (2d17h ago) 114d 192.168.1.11 slave01.filter.home <none> <none>
```
```shell
kubectl delete pod -n istio-system egress-gw-c857495b4-zzlkp
```
```text
pod "egress-gw-c857495b4-zzlkp" deleted
```
```shell
kubectl get pods -A -owide
```
```text
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
...
istio-system egress-gw-c857495b4-ll9jr 1/1 Running 0 2m15s 172.17.121.119 masterk.filter.home <none> <none>
...
```
Alright, that worked correctly; let's automate it a little.
```shell
kubectl get pods -A -owide | grep slave | awk '{ printf "kubectl delete pod -n %s %s\n",$1,$2 }'
```
```text
kubectl delete pod -n istio-system istio-local-ingress-778f675879-w9mdh
kubectl delete pod -n istio-system istio-public-ingress-5bb994c8b7-j9hgr
kubectl delete pod -n kube-system calico-node-sv6sp
kubectl delete pod -n kube-system kube-proxy-jpnkt
kubectl delete pod -n metallb-system speaker-9p4fz
```
```shell
kubectl get pods -A -owide | grep slave | awk '{ printf "kubectl delete pod -n %s %s\n",$1,$2 }' | bash
```
```text
pod "istio-local-ingress-778f675879-w9mdh" deleted
pod "istio-public-ingress-5bb994c8b7-j9hgr" deleted
pod "calico-node-sv6sp" deleted
pod "kube-proxy-jpnkt" deleted
pod "speaker-9p4fz" deleted
```
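As a side note, the same list of pods on the node can be produced without `grep`, by filtering server-side with a field selector (a sketch; the same node name is assumed):

```shell
# Ask the API server directly for pods scheduled on the node,
# printing only the namespace and pod name (the two columns the awk one-liner used).
NODE="slave01.filter.home"
kubectl get pods -A --field-selector spec.nodeName="$NODE" \
  --no-headers -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name
```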
Well, things were finally drained, with a little "help".
```text
pod/istio-local-ingress-778f675879-w9mdh evicted
pod/istio-public-ingress-5bb994c8b7-j9hgr evicted
node/slave01.filter.home drained
```
```shell
kubectl delete node slave01.filter.home
```
```text
node "slave01.filter.home" deleted
```
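With the node object deleted, the Stack Overflow answer linked above also suggests cleaning up the node itself before repurposing it (a sketch; these would be run on slave01 itself, which isn't shown in this walkthrough):

```shell
# On the removed node itself: revert what 'kubeadm join' configured,
# then clear the leftover CNI configuration so a future join starts clean.
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d
```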
### Part 6
# I am here <----
```shell
```
---
@@ -1295,17 +1451,6 @@ x-envoy-upstream-service-time: 2
@@ -1533,3 +1678,4 @@ This left me some questions, especially **how many things can be affected by thi
As well the hassle of this issue, provided more reasons to use the `ACME DNS01` challenge instead of the `ACME HTTP01`, which will be configured as soon as available.
### Jellyfin and HTTP2 upgrading