Fixing header formatting.

parent 40010b8c2a
commit cd7fdbdd16

@@ -234,6 +234,8 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r

### Part 7

> Completed 02/August/2023

- [x] Remove the last host from the old kubernetes cluster and join it to the new cluster.

### Part 8

@@ -244,6 +246,9 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r

- [ ] Deploy Istio security.

> **Note:**\
> If there are barely any workloads, I don't think it's wise to deploy restrictive security settings at the moment.

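For future reference, a minimal sketch of what "deploy Istio security" could start with: a mesh-wide `PeerAuthentication` enforcing strict mTLS. The policy scope and the decision to apply it mesh-wide are assumptions, not something defined in this walkthrough yet.

```shell
# Sketch (assumption): enforce strict mTLS mesh-wide once there are enough workloads.
# A PeerAuthentication named "default" in the root namespace (istio-system) applies mesh-wide.
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF
```
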
### Part 10

- [ ] Update the `Current Setup` documentation with the new container and architecture rearrangement.

@@ -1292,7 +1297,7 @@ x-envoy-upstream-service-time: 2

> It's very possible that I will also set up the HTTP to HTTPS redirect locally; I still need to decide whether there is any reason to keep the local HTTP traffic.


### Part 5
## Part 5

I decided to use an SSD with the Pi4 through a USB 3.0 connector.

@@ -1300,7 +1305,7 @@ On the Orange Pi5 I intend to use an m.2 that I bought for it, yet need to try i

Therefore, I will remove one node from the current Kubernetes cluster and try the M.2 drive I got on it.

#### Remove 1 OrangePI Node
### Remove 1 OrangePI Node

https://stackoverflow.com/questions/35757620/how-to-gracefully-remove-a-node-from-kubernetes

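Following the approach from that link, a sketch of the graceful removal, assuming the node name that shows up later in this log (`slave01.filter.home`):

```shell
# Cordon and drain the node so its workloads get rescheduled elsewhere.
kubectl drain slave01.filter.home --ignore-daemonsets --delete-emptydir-data

# Once drained, remove it from the cluster.
kubectl delete node slave01.filter.home
```
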
@@ -1354,7 +1359,7 @@ error when evicting pods/"istio-public-ingress-5bb994c8b7-j9hgr" -n "istio-syste

Well, the drain is stuck on it, so let's open a new shell.


#### Cannot evict pod as it would violate the pod's disruption budget.
### Cannot evict pod as it would violate the pod's disruption budget.

Let's delete the "remaining" pods manually.

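A sketch of what "manually" can look like here; the pod name is taken from the eviction error above, and `--grace-period=0 --force` is one way to get past a PodDisruptionBudget that blocks eviction (use with care, as it skips graceful shutdown):

```shell
# Force-delete the pod that the drain loop cannot evict because of its PodDisruptionBudget.
kubectl delete pod istio-public-ingress-5bb994c8b7-j9hgr -n istio-system --grace-period=0 --force
```
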
@@ -1431,7 +1436,7 @@ kubectl delete node slave01.filter.home

node "slave01.filter.home" deleted
```

### Part 6
## Part 6

Currently, the resources were as follows:

@@ -1450,17 +1455,17 @@ I will change it to

```


#### Prepare NVME images etc.
### Prepare NVME images etc.

Did the needful.

#### Set up cluster
### Set up cluster

Used the following Ansible script to set things up.

https://gitea.filterhome.xyz/ofilter/ansible_kubernetes_cluster

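A sketch of how the repository would be pulled and run; the clone URL is derived from the Gitea link above and `run.sh` is the entry point shown in the next section, but the exact invocation and inventory handling are assumptions:

```shell
# Assumed workflow: clone the Ansible repo and launch its wrapper script.
git clone https://gitea.filterhome.xyz/ofilter/ansible_kubernetes_cluster.git
cd ansible_kubernetes_cluster
./run.sh
```
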
##### run.sh
#### run.sh

<pre><span style="color:#FF7F7F"><b>✗</b></span> ./run.sh

@@ -1756,7 +1761,7 @@ PLAY RECAP *********************************************************************

<span style="color:#CC3980">slave02.filter.home</span> : <span style="color:#7F3FBF">ok=12 </span> <span style="color:#CC3980">changed=23 </span> unreachable=0 failed=0 <span style="color:#7f7fff">skipped=12 </span> rescued=0 ignored=0
</pre>

#### Check cluster status
### Check cluster status

```shell
@@ -1781,9 +1786,9 @@ metallb-system speaker-5zptn 1/1 Running 2

metallb-system speaker-whw4n 1/1 Running 2 (22m ago) 26m 192.168.1.11 slave02.filter.home <none> <none>
```

#### Kubeconfig
### Kubeconfig

##### Backup `Kubeconfig` file
#### Backup `Kubeconfig` file

I will back up the `kubeconfig.conf` file to a directory of my own.

@@ -1797,13 +1802,13 @@ cp ksetup/Exported/kubeconfig.conf ~/kubeconfig.conf -v

Pods are deployed correctly.

##### Configure new `Kubeconfig` location
#### Configure new `Kubeconfig` location

```shell
export KUBECONFIG="/home/savagebidoof/kubeconfig.conf"
```

##### Confirm `Kubeconfig` is selected properly
#### Confirm `Kubeconfig` is selected properly

```shell
kubectl get nodes

@@ -1815,7 +1820,7 @@ pi4.filter.home Ready control-plane 18h v1.27.4

slave02.filter.home Ready <none> 17h v1.27.4
```

#### Move workloads
### Move workloads

Well, it's time to move everything over.

@@ -1837,7 +1842,7 @@ I will be using

.39 -> egress LB
```

##### Deploy lacking CRDs
#### Deploy lacking CRDs

I already have `MetalLB` and `Calico` installed.

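The command that produced the `clusterissuers.cert-manager.io` output below sits outside this hunk; it was presumably the cert-manager manifest being applied, along these lines (the version number is an assumption):

```shell
# Assumed step: install cert-manager, whose CRDs are what the output below reports as created.
kubectl create -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
```
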
@@ -1856,7 +1861,7 @@ customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io cre

...
```

##### MetalLB Config
#### MetalLB Config

I am setting up 2 Address Pools.

@@ -1882,7 +1887,7 @@ kind: IPAddressPool

autoAssign: true
```

###### Deploy MetalLB configuration
##### Deploy MetalLB configuration

```shell
kubectl create -f P6_Redeployment/MetalLB.yaml

@@ -1896,9 +1901,9 @@ l2advertisement.metallb.io/l2-advert created

```

##### Deploy Istio Config and Load Balancers
#### Deploy Istio Config and Load Balancers

###### IstioOperator_IstioConfig.yaml
##### IstioOperator_IstioConfig.yaml

```shell
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_IstioConfig.yaml

@@ -1910,7 +1915,7 @@ istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_IstioConfig.ya

<span style="color:#7F3FBF">✔</span> Installation complete
Making this installation the default for injection and validation.</pre>

###### IstioOperator_IstioEgress.yaml
##### IstioOperator_IstioEgress.yaml

```shell
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_IstioEgress.yaml

@@ -1922,7 +1927,7 @@ istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_IstioEgress.ya

> **Note**:\
> The egress resource doesn't work "right off the bat"; it requires some additional configuration, which I don't think will be covered in this walkthrough.

###### IstioOperator_LocalIngress.yaml
##### IstioOperator_LocalIngress.yaml

```shell
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_LocalIngress.yaml

@@ -1931,7 +1936,7 @@ istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_LocalIngress.y

<pre><span style="color:#7F3FBF">✔</span> Ingress gateways installed
<span style="color:#7F3FBF">✔</span> Installation complete</pre>

###### IstioOperator_PublicIngress.yaml
##### IstioOperator_PublicIngress.yaml

```shell
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_PublicIngress.yaml

@@ -1940,7 +1945,7 @@ istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_PublicIngress.

<pre><span style="color:#7F3FBF">✔</span> Ingress gateways installed
<span style="color:#7F3FBF">✔</span> Installation complete </pre>

###### Check Service IP provisioning
##### Check Service IP provisioning

```shell
kubectl get svc -n istio-system | grep LoadBalancer

@@ -1950,15 +1955,15 @@ kubectl get svc -n istio-system | grep LoadBalancer

istio-local-ingress <span style="color:#FF7F7F"><b>LoadBalancer</b></span> 10.97.14.59 192.168.1.21 15021:30005/TCP,80:30168/TCP,443:32103/TCP 50m
istio-public-ingress <span style="color:#FF7F7F"><b>LoadBalancer</b></span> 10.100.53.247 192.168.1.20 15021:31249/TCP,80:30427/TCP,443:30411/TCP 50m</pre>

#### Prepare Secrets
### Prepare Secrets

##### Placeholder folder for God knows what
#### Placeholder folder for God knows what

```shell
mkdir tmp
```

##### Local Certs
#### Local Certs

```shell
openssl req -x509 -newkey rsa:4096 -sha512 -days 365 -nodes \

@@ -2003,7 +2008,7 @@ Add the base64 outputs to the secrets file `P6_Redeployment/non_Istio_Config/Loc

nano P6_Redeployment/non_Istio_Config/Local_Certs/Secret.yaml
```

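For reference, a sketch of how the base64 values for that Secret could be produced; the exact certificate/key filenames under `./tmp` are an assumption based on the cleanup output later on:

```shell
# Assumed filenames: the cleanup step later removes './tmp/ca.filter.home.key',
# so the generated cert/key pair presumably lives under ./tmp.
base64 -w0 ./tmp/ca.filter.home.crt
base64 -w0 ./tmp/ca.filter.home.key
```
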
##### Modify Public Certificate duration
#### Modify Public Certificate duration

Changed the duration to something more "reasonable".

@@ -2011,7 +2016,7 @@ Changed the duration to something more "reasonable".

nano P6_Redeployment/non_Istio_Config/Certificate_Manager/Issuer.yaml
```

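The duration knob lives on the cert-manager `Certificate` resource, which presumably sits in that `Issuer.yaml` alongside the Issuer itself; a sketch of the fields involved, with placeholder values rather than the ones actually used:

```shell
# Assumed shape of the hand edit (placeholder values); shown here only for reference.
cat <<'EOF'
spec:
  duration: 2160h     # certificate lifetime, e.g. 90 days
  renewBefore: 360h   # renew 15 days before expiry
EOF
```
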
##### Modify Local Certificate duration
#### Modify Local Certificate duration

Changed the duration to something more "reasonable".

@@ -2019,7 +2024,7 @@ Changed the duration to something more "reasonable".

nano P6_Redeployment/non_Istio_Config/Local_CA/Issuer.yaml
```

###### Set up HTTP to HTTPS in the local gateway.
##### Set up HTTP to HTTPS in the local gateway.

Added the following Gateway and modified the previously existing one to remove `HTTP` access through port 80.

@@ -2043,7 +2048,7 @@ spec:

httpsRedirect: true
```

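Only a fragment of that Gateway fits in the hunk above; for context, a sketch of what an HTTP-to-HTTPS redirect Gateway typically looks like in Istio. The name matches the `http-to-https-local` object created later in the log, while the namespace, selector and hosts below are assumptions:

```shell
# Sketch (assumed namespace/selector/hosts): a Gateway that answers on port 80 only to redirect to HTTPS.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: http-to-https-local
  namespace: default
spec:
  selector:
    istio: local-ingress
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.filter.home"
    tls:
      httpsRedirect: true
EOF
```
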
##### Deploy EVERYTHING not IstioOperator
#### Deploy EVERYTHING not IstioOperator

Well, it's time to press the create button. This will pop up some text; just make sure everything states "created".

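The actual command sits outside this hunk; "the create button" presumably boils down to something like the following, where the recursive flag and the directory path are assumptions based on the repo layout used above:

```shell
# Assumed command: create every manifest under the non-Istio config directory, recursively.
kubectl create -R -f P6_Redeployment/non_Istio_Config/
```
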
@@ -2077,7 +2082,7 @@ gateway.networking.istio.io/http-to-https-local created

gateway.networking.istio.io/local-gateway created
```

##### Remove ./tmp folder
#### Remove ./tmp folder

We no longer need the `./tmp` folder, so we can delete it.

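The command itself is outside the hunk below; judging by the `removed './tmp/ca.filter.home.key'` output, it was presumably a verbose recursive remove:

```shell
# Assumed cleanup command (verbose output matches what the log shows).
rm -rv ./tmp
```
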
@@ -2092,17 +2097,17 @@ removed './tmp/ca.filter.home.key'

```


##### Update the Router to point towards the "new" Ingress Load Balancer
#### Update the Router to point towards the "new" Ingress Load Balancer

Changed from `192.168.1.80` to `192.168.1.20`.


##### Update Local DNS
#### Update Local DNS

I did the needful.

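For completeness, a sketch of what "the needful" might look like if the local DNS is something like dnsmasq; the resolver choice and record name are pure assumptions, with `192.168.1.21` being the local ingress LoadBalancer address provisioned earlier:

```shell
# Assumed example (dnsmasq-style): resolve local hostnames to the local ingress LB.
echo 'address=/filter.home/192.168.1.21' >> /etc/dnsmasq.d/filter-home.conf
```
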
##### Monitor Public Cert Provisioning
#### Monitor Public Cert Provisioning

```shell
kubectl get events -n istio-system --field-selector involvedObject.name=filterhome-domain-cert-public,involvedObject.kind=Certificate --sort-by=.metadata.creationTimestamp --watch

@@ -2120,11 +2125,11 @@ LAST SEEN TYPE REASON OBJECT MES

```


### Part 7
## Part 7

I did set up the NVMe with the OS, etc.

#### Set up cluster
### Set up cluster

Used the following Ansible script to join the host to the Kubernetes cluster; all I had to do was populate `Exported/kubeadm-join.command`.

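The join command that gets pasted into `Exported/kubeadm-join.command` can be generated on the control plane; this is the standard kubeadm way of doing it, though whether it was obtained exactly like this is an assumption:

```shell
# Run on the control-plane node (pi4.filter.home) to print a fresh join command,
# then paste the output into Exported/kubeadm-join.command.
kubeadm token create --print-join-command
```
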
@@ -2352,7 +2357,7 @@ TASK [reboot] ******************************************************************

PLAY RECAP *******************************************************************************************************************************************************************************************************************************

<span style="color:#CC3980">slave01.filter.home</span> : <span style="color:#7F3FBF">ok=38 </span> <span style="color:#CC3980">changed=25 </span> unreachable=0 failed=0 <span style="color:#7F7FFF">skipped=10 </span> rescued=0 ignored=0</pre>

##### Check Cluster node list
#### Check Cluster node list

The node was added correctly.

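The verification itself isn't visible in this hunk; it was presumably the same check as before:

```shell
# Assumed verification: the new node should show up as Ready alongside the existing ones.
kubectl get nodes
```
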