diff --git a/Migrations/Forget_Traefik_2023/README.md b/Migrations/Forget_Traefik_2023/README.md
index bd6b2ee..a331a62 100644
--- a/Migrations/Forget_Traefik_2023/README.md
+++ b/Migrations/Forget_Traefik_2023/README.md
@@ -234,6 +234,8 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r

 ### Part 7

+> Completed 02/August/2023
+
 - [x] Remove the last host from the old kubernetes cluster and join it to the new cluster.

 ### Part 8
@@ -244,6 +246,9 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r

 - [ ] Deploy Istio security.

+> **Note:**\
+> Since there are barely any workloads running yet, I don't think it's wise to deploy restrictive security settings at the moment.
+
 ### Part 10

 - [ ] Update the `Current Setup` documentation with the new container and architecture rearrangement.
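For reference on what that deferred "Istio security" step usually means: the common first move is mesh-wide strict mTLS. A minimal sketch of such a setting, assuming the stock `security.istio.io` API and `istio-system` as the root namespace — this manifest is illustrative, not taken from the repository:

```shell
# Hypothetical example: a PeerAuthentication named "default" in the root
# namespace enforces strict mTLS for the entire mesh.
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF
```

With barely any workloads migrated, strict mTLS mostly risks locking out non-mesh clients, which is presumably why it is being deferred.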
@@ -1292,7 +1297,7 @@ x-envoy-upstream-service-time: 2

 > It's quite possible that I'll set up the HTTP to HTTPS redirect locally as well; I still need to decide whether there is any reason to keep local HTTP traffic.

-### Part 5
+## Part 5

 I decided to use an SSD with the Pi4 through a USB3.0 connector.

@@ -1300,7 +1305,7 @@ On the Orange Pi5 I intend to use an m.2 that I bought for it, yet need to try i

 Therefore, I will remove 1 node from the current Kubernetes cluster and try the m.2 there.

-#### Remove 1 OrangePI Node
+### Remove 1 OrangePI Node

 https://stackoverflow.com/questions/35757620/how-to-gracefully-remove-a-node-from-kubernetes

@@ -1354,7 +1359,7 @@ error when evicting pods/"istio-public-ingress-5bb994c8b7-j9hgr" -n "istio-syste

 Well, it's still working on it, so let's open a new shell.

-#### Cannot evict pod as it would violate the pod's disruption budget.
+### Cannot evict pod as it would violate the pod's disruption budget.

 Let's delete the "remaining" pods manually.
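The eviction loop above happens because eviction honors PodDisruptionBudgets while a plain delete does not. A minimal sketch of the manual cleanup, reusing the pod name from the drain error — the PDB listing is simply a way to see which budget is blocking:

```shell
# A budget whose minAvailable equals the current replica count can never admit
# an eviction, so the drain retries forever.
kubectl get pdb -A

# Deleting the pod directly bypasses the eviction API and its PDB check.
kubectl delete pod istio-public-ingress-5bb994c8b7-j9hgr -n istio-system
```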
@@ -1431,7 +1436,7 @@ kubectl delete node slave01.filter.home
 node "slave01.filter.home" deleted
 ```

-### Part 6
+## Part 6

 Currently, the resources were:

@@ -1450,17 +1455,17 @@ I will change it to

 ```

-#### Prepare NVME images etc.
+### Prepare NVME images etc.

 Did the needful.

-#### Set up cluster
+### Set up cluster

 Used the following ansible script to set things up.

 https://gitea.filterhome.xyz/ofilter/ansible_kubernetes_cluster

-##### run.sh
+#### run.sh

 ```shell
 ✗ ./run.sh
@@ -1756,7 +1761,7 @@ PLAY RECAP *********************************************************************
 slave02.filter.home        : ok=12   changed=23   unreachable=0    failed=0    skipped=12   rescued=0    ignored=0
 ```

-#### Check cluster status
+### Check cluster status

 ```shell
@@ -1781,9 +1786,9 @@ metallb-system   speaker-5zptn                              1/1     Running   2
 metallb-system   speaker-whw4n                              1/1     Running   2 (22m ago)   26m   192.168.1.11   slave02.filter.home
 ```

 ✔ Ingress gateways installed
 ✔ Installation complete
-###### IstioOperator_PublicIngress.yaml
+##### IstioOperator_PublicIngress.yaml

 ```shell
 istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_PublicIngress.yaml
@@ -1940,7 +1945,7 @@ istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_PublicIngress.
 ✔ Ingress gateways installed
 ✔ Installation complete
 ```

-###### Check Service IP provisioning
+##### Check Service IP provisioning

 ```shell
 kubectl get svc -n istio-system | grep LoadBalancer
@@ -1950,15 +1955,15 @@ kubectl get svc -n istio-system | grep LoadBalancer
 istio-local-ingress    LoadBalancer   10.97.14.59     192.168.1.21   15021:30005/TCP,80:30168/TCP,443:32103/TCP   50m
 istio-public-ingress   LoadBalancer   10.100.53.247   192.168.1.20   15021:31249/TCP,80:30427/TCP,443:30411/TCP   50m
 ```

-#### Prepare Secrets
+### Prepare Secrets

-##### Placeholder folder for God knows what
+#### Placeholder folder for God knows what

 ```shell
 mkdir tmp
 ```

-##### Local Certs
+#### Local Certs

 ```shell
 openssl req -x509 -newkey rsa:4096 -sha512 -days 365 -nodes \
@@ -2003,7 +2008,7 @@ Add the base64 outputs to the secrets file `P6_Redeployment/non_Istio_Config/Loc

 ```shell
 nano P6_Redeployment/non_Istio_Config/Local_Certs/Secret.yaml
 ```

-##### Modify Public Certificate duration
+#### Modify Public Certificate duration

 Changed the duration to something more "reasonable".
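For context, cert-manager expresses validity on the `Certificate` resource through `duration` and `renewBefore`. A hedged sketch of the kind of tuning done here — the names, durations, hostname, and issuer are illustrative, not the repository's actual values:

```shell
# Hypothetical Certificate showing the two knobs being made "reasonable".
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert           # assumed name, for illustration only
  namespace: istio-system
spec:
  secretName: example-cert-tls
  duration: 2160h              # 90 days of validity
  renewBefore: 360h            # start renewal 15 days before expiry
  dnsNames:
    - example.filter.home      # assumed hostname
  issuerRef:
    name: local-ca-issuer      # assumed issuer name
    kind: Issuer
EOF
```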
@@ -2011,7 +2016,7 @@ Changed the duration to something more "reasonable".

 ```shell
 nano P6_Redeployment/non_Istio_Config/Certificate_Manager/Issuer.yaml
 ```

-##### Modify Local Certificate duration
+#### Modify Local Certificate duration

 Changed the duration to something more "reasonable".

@@ -2019,7 +2024,7 @@ Changed the duration to something more "reasonable".

 ```shell
 nano P6_Redeployment/non_Istio_Config/Local_CA/Issuer.yaml
 ```

-###### Set up HTTP to HTTPS in the local gateway.
+##### Set up HTTP to HTTPS in the local gateway.

 Added the following Gateway, and modified the previously existing one to remove `HTTP` access through port 80.
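Only the tail of that Gateway (`httpsRedirect: true`) is visible in the next hunk. A sketch of what the full object plausibly looks like — the `http-to-https-local` name comes from the apply output further down, while the selector and hosts are assumptions:

```shell
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: http-to-https-local    # name taken from the "created" output below
  namespace: istio-system
spec:
  selector:
    istio: local-ingress       # assumed label on the local ingress gateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*.filter.home"      # assumed hosts
      tls:
        httpsRedirect: true    # answers plain HTTP with a 301 to HTTPS
EOF
```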
@@ -2043,7 +2048,7 @@ spec:
       httpsRedirect: true
 ```

-##### Deploy EVERYTHING not IstioOperator
+#### Deploy EVERYTHING not IstioOperator

 Well, it's time to press the create button. This will pop up some text; just make sure everything shows up as "created".

@@ -2077,7 +2082,7 @@ gateway.networking.istio.io/http-to-https-local created
 gateway.networking.istio.io/local-gateway created
 ```

-##### Remove ./tmp folder
+#### Remove ./tmp folder

 We no longer need the ./tmp folder, so we can delete it.

@@ -2092,17 +2097,17 @@ removed './tmp/ca.filter.home.key'
 ```

-##### Update the Router to point towards the "new" Ingress Load Balancer
+#### Update the Router to point towards the "new" Ingress Load Balancer

 Changed from `192.168.1.80` to `192.168.1.20`.

-##### Update Local DNS
+#### Update Local DNS

 I did the needful.

-##### Monitor Public Cert Provisioning
+#### Monitor Public Cert Provisioning

 ```shell
 kubectl get events -n istio-system --field-selector involvedObject.name=filterhome-domain-cert-public,involvedObject.kind=Certificate --sort-by=.metadata.creationTimestamp --watch
@@ -2120,11 +2125,11 @@ LAST SEEN   TYPE     REASON   OBJECT   MES
 ```

-### Part 7
+## Part 7

 I did set up the NVME with the OS etc.

-#### Set up cluster
+### Set up cluster

 Used the following ansible script to join the host to the Kubernetes cluster; all I had to do was populate the `Exported/kubeadm-join.command`.
@@ -2352,7 +2357,7 @@ TASK [reboot] ******************************************************************

 PLAY RECAP *******************************************************************************************************************************************************************************************************************************
 slave01.filter.home        : ok=38   changed=25   unreachable=0    failed=0    skipped=10   rescued=0    ignored=0

-##### Check Cluster node list
+#### Check Cluster node list

 The node was added correctly.
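A quick way to double-check that claim from the control plane — a minimal sketch, assuming `kubectl` access; the node should report `Ready`, and pods should begin landing on it shortly after:

```shell
# Confirm the rejoined node shows Ready.
kubectl get nodes -o wide

# List the pods scheduled onto the rejoined node.
kubectl get pods -A -o wide --field-selector spec.nodeName=slave01.filter.home
```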