|
|
|
|
|
|
|
|
|
|
|
|
|
- [x] Deploy an Ingress LB for local services.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
> **Note:**
|
|
|
|
|
> - https://istio.io/latest/docs/tasks/traffic-management/egress/
|
|
|
|
|
> - https://istio.io/latest/docs/tasks/traffic-management/egress/egress-kubernetes-services/
|
|
|
|
|
> - https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/
|
|
|
|
|
|
|
|
|
- [x] Deploy configurations to route some services through the deployed Istio `istio-local-ingress` Load Balancer.
|
|
|
|
|
|
|
|
|
|
> **Note**:\
|
|
|
|
|
|
|
|
|
|
> Regarding Let's Encrypt certificate provisioning, the `staging` environment should be used for testing purposes; nevertheless, in my scenario I am running directly against the production environment. Why?
|
|
|
|
|
> - `Staging` and `Production` behave differently, therefore one can get the certificates verified on `Staging` and not on `Production`.
|
|
|
|
|
> - I ran into some issues regarding the point above, so there was some back and forth; this topic is mentioned [here at the end.](#2x1-able-to-get-staging-le-certs-but-not-the-production-one-when-using-custom-istio-selector-and-only-able-to-get-production-le-certs-when-using-the-default-istio-ingressgateway-selector)
|
|
|
|
|
> - Since there was "back and forth", I sort of cheated and left this checked as it is.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Part 4
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
> Completed 27/July/2023
|
|
|
|
|
|
|
|
|
|
- [x] Deploy a local Certificate Authority service (on the SRV host).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Part 5
|
|
|
|
|
|
|
|
|
|
> Completed 01/August/2023
|
|
|
|
|
|
|
|
|
|
- [x] Explore Pi4 Storage options.
|
|
|
|
|
|
|
|
|
|
- [x] Consider Storage options for the OrangePi5.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Part 6
|
|
|
|
|
|
|
|
|
|
> Completed 02/August/2023
|
|
|
|
|
|
|
|
|
|
- [x] ~~Wipe~~ (**don't wipe** just use a different drive) and recreate the current `Kluster`, this time using the Pi4 as a _master_, and the 2 Orange Pi5 as _slaves_ (this will require updating the DNS/DHCP local services).
|
|
|
|
|
|
|
|
|
|
- [x] Deploy all the services from the previous Kubernetes cluster to the new one.
|
|
|
|
|
|
|
|
|
|
> **Note**:\
|
|
|
|
|
> I can create a new cluster on the Pi4 and remove the taint that prevents scheduling pods on that node. Deploy everything inside (as well as an LB with the exact same IP as the current one, and proceed to stop the Orange Pi 5s), then "reformat" the OPi5s with a new distro, install stuff etc., and join them to the cluster running on the Pi4.
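For reference, removing the control-plane taint is a one-liner; a sketch, assuming the node name from my setup (on older clusters the taint key is `node-role.kubernetes.io/master` instead):

```shell
# Allow regular workloads to be scheduled on the Pi4 control-plane node.
kubectl taint nodes pi4.filter.home node-role.kubernetes.io/control-plane:NoSchedule-
```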
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Part 7
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
> Completed 02/August/2023
|
|
|
|
|
|
|
|
|
|
- [x] Remove the last host from the old kubernetes cluster and join it to the new cluster.
|
|
|
|
|
|
|
|
|
|
### Part 8
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- [x] Set wildcards certificates through `ACME DNS01` challenge.
|
|
|
|
|
|
|
|
|
|
### Part 9
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- [x] Deploy NFS service(s) on the `media SRV` host.
|
|
|
|
|
|
|
|
|
|
### Part 10
|
|
|
|
|
|
|
|
|
|
- [ ] Deploy Istio security.
|
|
|
|
|
|
|
|
|
|
> **Note:**\
|
|
|
|
|
> Since there are barely any workloads, I don't think it's wise to deploy restrictive security settings for the moment.
|
|
|
|
|
|
|
|
|
|
### Part 11
|
|
|
|
|
|
|
|
|
|
- [ ] Update the `Current Setup` documentation with the new container and architecture rearrangement.
|
|
|
|
|
|
|
|
|
|
- [ ] Migrate some lightweight/not data heavy services from the `media SRV` to the `Kluster`.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- Run the old migrated services back on the `media SRV` host.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### TMP Notes
|
|
|
|
|
|
|
|
|
|
- Set wildcards certificates through `ACME DNS01` challenge.
|
|
|
|
|
|
|
|
|
|
### Extras?
|
|
|
|
|
|
|
|
|
|
#### Horizontal Pod Autoscaling for the Istio LBs.
|
|
|
|
|
- https://github.com/joohoi/acme-dns
|
|
|
|
|
|
|
|
|
|
# Execution
|
|
|
|
|
|
|
|
|
|
## Part 1
|
|
|
|
|
|
|
|
|
|
### Transfer local network dependencies services from Pi4 to SRV.
|
|
|
|
|
|
|
|
|
|
#### Install ZIP on `Pi4`
|
|
|
|
|
|
|
|
|
First, let's generate a certificate and its key.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
|
|
|
|
|
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
|
|
|
|
|
-keyout ca.filter.home.key -out ca.filter.home.cer \
|
|
|
|
|
-subj /C=ES/ST=BAR/O=FilterHome/CN=ca.filter.home \
|
|
|
|
|
-extensions ext \
-config <(cat <<EOF
[req]
distinguished_name=req
[ext]
keyUsage=critical,keyCertSign,cRLSign
basicConstraints=critical,CA:true,pathlen:1
subjectAltName=DNS:ca.filter.home
EOF
)
```
|
|
|
|
|
|
|
|
|
> It's extremely possible that I will set up the HTTP to HTTPS redirect locally as well; I still need to decide whether there is any reason to keep the local HTTP traffic.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Part 5
|
|
|
|
|
|
|
|
|
|
I decided to use an SSD with the Pi4 through a USB 3.0 connector.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Therefore, I will remove 1 node from the current Kubernetes cluster and try there the m.2 that I got.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Remove 1 OrangePI Node
|
|
|
|
|
|
|
|
|
|
https://stackoverflow.com/questions/35757620/how-to-gracefully-remove-a-node-from-kubernetes
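The gist of it, as per that thread, is a drain followed by a delete; a sketch using the node name from my cluster (flags may need adjusting to your workloads):

```shell
# Evict the workloads and mark the node unschedulable.
kubectl drain slave01.filter.home --ignore-daemonsets --delete-emptydir-data

# Once drained, remove the node from the cluster.
kubectl delete node slave01.filter.home
```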
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Well, it's busy with that, so let's open a new shell.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Cannot evict pod as it would violate the pod's disruption budget.
|
|
|
|
|
|
|
|
|
|
Let's delete the "remaining" pods manually.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
node "slave01.filter.home" deleted
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Part 6
|
|
|
|
|
|
|
|
|
|
Currently, the resources were:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Prepare NVME images etc.
|
|
|
|
|
|
|
|
|
|
Did the needful.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Set up cluster
|
|
|
|
|
|
|
|
|
|
Used the following Ansible script to set things up.
|
|
|
|
|
|
|
|
|
|
https://gitea.filterhome.xyz/ofilter/ansible_kubernetes_cluster
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### run.sh
|
|
|
|
|
|
|
|
|
|
<pre><span style="color:#FF7F7F"><b>✗</b></span> ./run.sh
|
|
|
|
|
|
|
|
|
|
|
|
|
|
<span style="color:#CC3980">slave02.filter.home</span> : <span style="color:#7F3FBF">ok=12 </span> <span style="color:#CC3980">changed=23 </span> unreachable=0 failed=0 <span style="color:#7f7fff">skipped=12 </span> rescued=0 ignored=0
|
|
|
|
|
</pre>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Check cluster status
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Pods are deployed correctly
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl get pods --kubeconfig ~/kubeconfig.conf -A -owide
|
|
|
|
|
|
|
|
|
metallb-system speaker-whw4n 1/1 Running 2 (22m ago) 26m 192.168.1.11 slave02.filter.home <none> <none>
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
### Kubeconfig
|
|
|
|
|
|
|
|
|
|
#### Backup `Kubeconfig` file
|
|
|
|
|
|
|
|
|
|
I will back up `kubeconfig.conf` file to a directory of my own.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
cp ksetup/Exported/kubeconfig.conf ~/kubeconfig.conf -v
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
'ksetup/Exported/kubeconfig.conf' -> '/home/savagebidoof/kubeconfig.conf'
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### Configure new `Kubeconfig` location
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
export KUBECONFIG="/home/savagebidoof/kubeconfig.conf"
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
#### Confirm `Kubeconfig` is selected properly
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl get nodes
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
NAME STATUS ROLES AGE VERSION
|
|
|
|
|
pi4.filter.home Ready control-plane 18h v1.27.4
|
|
|
|
|
slave02.filter.home Ready <none> 17h v1.27.4
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
### Move workloads
|
|
|
|
|
|
|
|
|
|
Well, it's time to move everything over.
|
|
|
|
|
|
|
|
|
|
During the couple of days I was waiting for the NVMe I ordered, I considered changing the IPs of the deployed LBs.
|
|
|
|
|
|
|
|
|
|
Instead of using:
|
|
|
|
|
|
|
|
|
|
```yaml
|
|
|
|
|
.80 -> public LB
|
|
|
|
|
.81 -> local LB
|
|
|
|
|
.90 -> egress LB
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
I will be using
|
|
|
|
|
|
|
|
|
|
```yaml
|
|
|
|
|
.20 -> public LB
|
|
|
|
|
.21 -> local LB
|
|
|
|
|
.39 -> egress LB
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
#### Deploy missing CRDs
|
|
|
|
|
|
|
|
|
|
I already have `MetalLB` and `Calico` installed.
|
|
|
|
|
|
|
|
|
|
I lack Cert-manager CRDs.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
namespace/cert-manager created
|
|
|
|
|
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
|
|
|
|
|
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
|
|
|
|
|
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
|
|
|
|
|
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
|
|
|
|
|
...
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
#### MetalLB Config
|
|
|
|
|
|
|
|
|
|
I am setting up 2 Address Pools.
|
|
|
|
|
|
|
|
|
|
`cherrypick` for the things whose IPs I want to pick by hand.
|
|
|
|
|
|
|
|
|
|
```yaml
|
|
|
|
|
kind: IPAddressPool
|
|
|
|
|
...
|
|
|
|
|
name: cherrypick
|
|
|
|
|
- 192.168.1.20-192.168.1.39
|
|
|
|
|
...
|
|
|
|
|
autoAssign: false
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
And `flex` to assign IPs dynamically.
|
|
|
|
|
|
|
|
|
|
```yaml
|
|
|
|
|
kind: IPAddressPool
|
|
|
|
|
...
|
|
|
|
|
name: flex
|
|
|
|
|
- 192.168.1.41-192.168.1.60
|
|
|
|
|
...
|
|
|
|
|
autoAssign: true
|
|
|
|
|
```
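Both pools also need to be announced on the network; a minimal `L2Advertisement` sketch matching the `l2-advert` name that shows up in the output below (the namespace is MetalLB's default):

```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advert
  namespace: metallb-system
spec:
  # Announce both pools on the local L2 segment.
  ipAddressPools:
    - cherrypick
    - flex
```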
|
|
|
|
|
|
|
|
|
|
##### Deploy MetalLB configuration
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl create -f P6_Redeployment/MetalLB.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
ipaddresspool.metallb.io/cherrypick created
|
|
|
|
|
ipaddresspool.metallb.io/flex created
|
|
|
|
|
l2advertisement.metallb.io/l2-advert created
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
> **Note**:\
> Should add labels to the SE (`ServiceEntry`) resources etc.
|
|
|
|
|
|
|
|
|
|
#### Deploy Istio Config and Load Balancers
|
|
|
|
|
|
|
|
|
|
##### IstioOperator_IstioConfig.yaml
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_IstioConfig.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
<pre>This will install the Istio 1.18.2 minimal profile with ["Istio core" "Istiod"] components into the cluster. Proceed? (y/N) y
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Istio core installed
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Istiod installed
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Installation complete
|
|
|
|
|
Making this installation the default for injection and validation.</pre>
|
|
|
|
|
|
|
|
|
|
##### IstioOperator_IstioEgress.yaml
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_IstioEgress.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
<pre><span style="color:#7F3FBF">✔</span> Egress gateways installed
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Installation complete </pre>
|
|
|
|
|
|
|
|
|
|
> **Note**:\
|
|
|
|
|
> The egress gateway doesn't work "right off the bat"; it requires some additional configuration, which I don't think will be covered in this "walkthrough".
|
|
|
|
|
|
|
|
|
|
##### IstioOperator_LocalIngress.yaml
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_LocalIngress.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
<pre><span style="color:#7F3FBF">✔</span> Ingress gateways installed
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Installation complete</pre>
|
|
|
|
|
|
|
|
|
|
##### IstioOperator_PublicIngress.yaml
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_PublicIngress.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
<pre><span style="color:#7F3FBF">✔</span> Ingress gateways installed
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Installation complete </pre>
|
|
|
|
|
|
|
|
|
|
##### Check Service IP provisioning
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl get svc -n istio-system | grep LoadBalancer
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
<pre>egress-gw <span style="color:#FF7F7F"><b>LoadBalancer</b></span> 10.106.41.20 192.168.1.39 80:31322/TCP,443:30559/TCP 138m
|
|
|
|
|
istio-local-ingress <span style="color:#FF7F7F"><b>LoadBalancer</b></span> 10.97.14.59 192.168.1.21 15021:30005/TCP,80:30168/TCP,443:32103/TCP 50m
|
|
|
|
|
istio-public-ingress <span style="color:#FF7F7F"><b>LoadBalancer</b></span> 10.100.53.247 192.168.1.20 15021:31249/TCP,80:30427/TCP,443:30411/TCP 50m</pre>
|
|
|
|
|
|
|
|
|
|
### Prepare Secrets
|
|
|
|
|
|
|
|
|
|
#### Placeholder folder for God knows what
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
mkdir tmp
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
#### Local Certs
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
openssl req -x509 -newkey rsa:4096 -sha512 -days 365 -nodes \
|
|
|
|
|
-keyout tmp/ca.filter.home.key -out tmp/ca.filter.home.cer \
|
|
|
|
|
-subj /C=ES/ST=BAR/O=FilterHome/CN=ca.filter.home \
|
|
|
|
|
-extensions ext \
|
|
|
|
|
-config <(cat <<EOF
|
|
|
|
|
[req]
|
|
|
|
|
distinguished_name=req
|
|
|
|
|
[ext]
|
|
|
|
|
keyUsage=critical,keyCertSign,cRLSign
|
|
|
|
|
basicConstraints=critical,CA:true,pathlen:1
|
|
|
|
|
subjectAltName=DNS:ca.filter.home
|
|
|
|
|
EOF
|
|
|
|
|
)
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
cmdsubst heredoc> [req]
|
|
|
|
|
cmdsubst heredoc> distinguished_name=req
|
|
|
|
|
cmdsubst heredoc> [ext]
|
|
|
|
|
cmdsubst heredoc> keyUsage=critical,keyCertSign,cRLSign
|
|
|
|
|
cmdsubst heredoc> basicConstraints=critical,CA:true,pathlen:1
|
|
|
|
|
cmdsubst heredoc> subjectAltName=DNS:ca.filter.home
|
|
|
|
|
cmdsubst heredoc> EOF
|
|
|
|
|
cmdsubst> )
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
Export the `.key` and `.cer` files as base64:
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
cat tmp/ca.filter.home.cer | base64 | tr -d '\n'
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
cat tmp/ca.filter.home.key | base64 | tr -d '\n'
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
Add the base64 outputs to the secrets file `P6_Redeployment/non_Istio_Config/Local_Certs/Secret.yaml`
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
nano P6_Redeployment/non_Istio_Config/Local_Certs/Secret.yaml
|
|
|
|
|
```
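For reference, a rough sketch of what that Secret contains; the name `local-ca` matches the `secret/local-ca created` line in the deploy output further below, while the namespace is an assumption based on where cert-manager expects CA secrets referenced by a `ClusterIssuer`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: local-ca
  namespace: cert-manager
type: kubernetes.io/tls
data:
  # The two base64 strings exported above.
  tls.crt: <base64 of ca.filter.home.cer>
  tls.key: <base64 of ca.filter.home.key>
```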
|
|
|
|
|
|
|
|
|
|
#### Modify Public Certificate duration
|
|
|
|
|
|
|
|
|
|
Changed the duration to something more "reasonable".
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
nano P6_Redeployment/non_Istio_Config/Certificate_Manager/Issuer.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
#### Modify Local Certificate duration
|
|
|
|
|
|
|
|
|
|
Changed the duration to something more "reasonable".
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
nano P6_Redeployment/non_Istio_Config/Local_CA/Issuer.yaml
|
|
|
|
|
```
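For context, certificate lifetimes in cert-manager are controlled by the `duration` and `renewBefore` fields on the `Certificate` resource; a rough sketch using the `local-wildcard-certificate` name from the deploy output below (the values here are illustrative, not necessarily the ones I set):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: local-wildcard-certificate
spec:
  # Lifetime of the issued certificate.
  duration: 2160h   # 90 days
  # Start renewing this long before expiry.
  renewBefore: 360h # 15 days
```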
|
|
|
|
|
|
|
|
|
|
#### Set up HTTP to HTTPS in the local gateway
|
|
|
|
|
|
|
|
|
|
Added the following Gateway and modified the previously existing one to remove `HTTP` access through port 80.
|
|
|
|
|
|
|
|
|
|
```yaml
|
|
|
|
|
apiVersion: networking.istio.io/v1alpha3
|
|
|
|
|
kind: Gateway
|
|
|
|
|
metadata:
|
|
|
|
|
name: http-to-https-local
|
|
|
|
|
namespace: default
|
|
|
|
|
spec:
|
|
|
|
|
selector:
|
|
|
|
|
istio: public-ingress
|
|
|
|
|
servers:
|
|
|
|
|
- port:
|
|
|
|
|
number: 80
|
|
|
|
|
name: http
|
|
|
|
|
protocol: HTTP
|
|
|
|
|
hosts:
|
|
|
|
|
- "*"
|
|
|
|
|
tls:
|
|
|
|
|
httpsRedirect: true
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
#### Deploy EVERYTHING not IstioOperator
|
|
|
|
|
|
|
|
|
|
Well, it's time to press the create button. This will pop up some text; just make sure everything states `created`.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl create -f P6_Redeployment/non_Istio_Config -R
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
clusterissuer.cert-manager.io/letsencrypt-public created
|
|
|
|
|
certificate.cert-manager.io/filterhome-domain-cert-public created
|
|
|
|
|
namespace/external created
|
|
|
|
|
destinationrule.networking.istio.io/filebrowser created
|
|
|
|
|
serviceentry.networking.istio.io/filebrowser-se created
|
|
|
|
|
virtualservice.networking.istio.io/filebrowser-vs created
|
|
|
|
|
gateway.networking.istio.io/http-to-https-public created
|
|
|
|
|
gateway.networking.istio.io/public-gateway created
|
|
|
|
|
destinationrule.networking.istio.io/gitea created
|
|
|
|
|
serviceentry.networking.istio.io/gitea-se created
|
|
|
|
|
virtualservice.networking.istio.io/gitea-vs created
|
|
|
|
|
workloadentry.networking.istio.io/srv-host created
|
|
|
|
|
destinationrule.networking.istio.io/jelly created
|
|
|
|
|
serviceentry.networking.istio.io/jelly-se created
|
|
|
|
|
virtualservice.networking.istio.io/jelly-vs created
|
|
|
|
|
destinationrule.networking.istio.io/tube created
|
|
|
|
|
serviceentry.networking.istio.io/tube-se created
|
|
|
|
|
virtualservice.networking.istio.io/tube-vs created
|
|
|
|
|
clusterissuer.cert-manager.io/ca-issuer created
|
|
|
|
|
certificate.cert-manager.io/local-wildcard-certificate created
|
|
|
|
|
secret/local-ca created
|
|
|
|
|
gateway.networking.istio.io/http-to-https-local created
|
|
|
|
|
gateway.networking.istio.io/local-gateway created
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
#### Remove ./tmp folder
|
|
|
|
|
|
|
|
|
|
We no longer need the folder ./tmp, therefore we can delete it.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
rm ./tmp/* -v
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
zsh: sure you want to delete all 4 files in /home/savagebidoof/IdeaProjects/home_shit/Migrations/Forget_Traefik_2023/./tmp [yn]? y
|
|
|
|
|
removed './tmp/ca.filter.home.cer'
|
|
|
|
|
removed './tmp/ca.filter.home.key'
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### Update the Router to point towards the "new" Ingress Load Balancer
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Changed from `192.168.1.80` to `192.168.1.20`.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### Update Local DNS
|
|
|
|
|
|
|
|
|
|
I did the needful.
|
|
|
|
|
|
|
|
|
|
#### Monitor Public Cert Provisioning
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl get events -n istio-system --field-selector involvedObject.name=filterhome-domain-cert-public,involvedObject.kind=Certificate --sort-by=.metadata.creationTimestamp --watch
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
LAST SEEN TYPE REASON OBJECT MESSAGE
|
|
|
|
|
13m Normal Issuing certificate/filterhome-domain-cert-public Issuing certificate as Secret does not exist
|
|
|
|
|
12m Normal Generated certificate/filterhome-domain-cert-public Stored new private key in temporary Secret resource "filterhome-domain-cert-public-2vdxk"
|
|
|
|
|
12m Normal Requested certificate/filterhome-domain-cert-public Created new CertificateRequest resource "filterhome-domain-cert-public-js69j"
|
|
|
|
|
8m46s Normal Issuing certificate/filterhome-domain-cert-public Issuing certificate as Secret does not exist
|
|
|
|
|
8m35s Normal Generated certificate/filterhome-domain-cert-public Stored new private key in temporary Secret resource "filterhome-domain-cert-public-n8w8s"
|
|
|
|
|
8m35s Normal Requested certificate/filterhome-domain-cert-public Created new CertificateRequest resource "filterhome-domain-cert-public-cb8ws"
|
|
|
|
|
103s Normal Issuing certificate/filterhome-domain-cert-public The certificate has been successfully issued
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Part 7
|
|
|
|
|
|
|
|
|
|
I set up the NVMe with the OS etc.
|
|
|
|
|
|
|
|
|
|
### Set up cluster
|
|
|
|
|
|
|
|
|
|
Used the following Ansible script to join the host to the Kubernetes cluster; all I had to do was populate `Exported/kubeadm-join.command`.
|
|
|
|
|
|
|
|
|
|
https://gitea.filterhome.xyz/ofilter/ansible_kubernetes_cluster
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Difficulties
|
|
|
|
|
|
|
|
|
|
<pre><span style="color:#FF7F7F"><b>✗</b></span> ./run.sh
|
|
|
|
|
|
|
|
|
|
PLAY [Pre Setup] *************************************************************************************************************************************************************************************************************************
|
|
|
|
|
|
|
|
|
|
TASK [apt update] ************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [apt upgrade] ***********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Install sudo] **********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [update facts] **********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Make sure we have a 'wheel' group] *************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Allow 'wheel' group to have passwordless sudo] *************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Add user new_user_name] ************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">[DEPRECATION WARNING]: Encryption using the Python crypt module is deprecated. The Python crypt module is deprecated and will be removed from Python 3.13. Install the passlib library for continued encryption functionality. This </span>
|
|
|
|
|
<span style="color:#7F3FBF">feature will be removed in version 2.17. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.</span>
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [adding user 'klussy' to group wheel] ***********************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Select new user] *******************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Change root default password] ******************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">[DEPRECATION WARNING]: Encryption using the Python crypt module is deprecated. The Python crypt module is deprecated and will be removed from Python 3.13. Install the passlib library for continued encryption functionality. This </span>
|
|
|
|
|
<span style="color:#7F3FBF">feature will be removed in version 2.17. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.</span>
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [PermitRootLogin = no] **************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [PermitEmptyPasswords = no] *********************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Ensure localisation files for 'en_US.UTF-8' are available] *************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Ensure localisation files for 'en_US.UTF-8' are available] *************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Get current locale and language configuration] *************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Configure locale to 'en_US.UTF-8' and language to 'en_US.UTF-8'] *******************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [reboot] ****************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">slave01.filter.home</span> : <span style="color:#7F3FBF">ok=10 </span> <span style="color:#CC3980">changed=11 </span> unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 </pre>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
<pre>PLAY [Preparethings] *********************************************************************************************************************************************************************************************************************
|
|
|
|
|
|
|
|
|
|
TASK [Gathering Facts] *******************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [debug] *****************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home] => {</span>
|
|
|
|
|
<span style="color:#7F3FBF"> "set_hostname": "slave01.filter.home"</span>
|
|
|
|
|
<span style="color:#7F3FBF">}</span>
|
|
|
|
|
|
|
|
|
|
TASK [debug] *****************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home] => {</span>
|
|
|
|
|
<span style="color:#7F3FBF"> "is_master": false</span>
|
|
|
|
|
<span style="color:#7F3FBF">}</span>
|
|
|
|
|
|
|
|
|
|
TASK [Set a hostname] ********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Swapoff] ***************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Disable ram on boot (orangepi)] ****************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Disable ram on boot (armbian)] *****************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [apt prune containerd] **************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [apt update] ************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [apt upgrade] ***********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [apt install gnupg] *****************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Creating a new directory] **********************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Add Docker GPG key] ****************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Add Docker APT repository] *********************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Add Kubernetes GPG key] ************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Add Kubernetes APT repository] *****************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [apt update] ************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Install Kubelet Kubeadm Kubectl] ***************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Hold kubeadm] **********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Hold kubelet] **********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Hold kubectl] **********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Install Container Runtime] *********************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Containerd set default config] *****************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [SystemdCgroup = true] **************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Iptables thingies (not touching specific firewall rules.)] *************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Iptables thingies] *****************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Add the overlay module] ************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Add the br_netfilter module] *******************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Apply changes (might need to use sysctl module with the reload flag, will try eventually)] *****************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Enable kubelet] ********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Enable containerd] *****************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Enable kubelet] ********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Restart containerd] ****************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Set /etc/hosts file content (template/base)] ***************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Init cluster] **********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Export remote kubeconfig file] *****************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Remove Taint (allows deployment in control plane node)] ****************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Calico] ****************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [MetalLB] ***************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Generate join token] ***************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [set_fact] **************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [debug] *****************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Store join command in "./Exported/kubeadm-join.command"] ***************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Populate] **************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [debug] *****************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#7F3FBF">ok: [slave01.filter.home] => {</span>
|
|
|
|
|
<span style="color:#7F3FBF"> "_kubeadm_join_command": "kubeadm join 192.168.1.9:6443 --token fjw9iy.0v1vim764ls5mpyp --discovery-token-ca-cert-hash sha256:028116b6076078c15bd4212b8a96ecc159507a07df11d7f9859b2593533616c3"</span>
|
|
|
|
|
<span style="color:#7F3FBF">}</span>
|
|
|
|
|
|
|
|
|
|
TASK [Join kubeadm] **********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [Delete user] ***********************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
TASK [reboot] ****************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
|
|
|
|
|
|
|
|
|
|
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
|
|
|
|
|
<span style="color:#CC3980">slave01.filter.home</span> : <span style="color:#7F3FBF">ok=38 </span> <span style="color:#CC3980">changed=25 </span> unreachable=0 failed=0 <span style="color:#7F7FFF">skipped=10 </span> rescued=0 ignored=0</pre>
|
|
|
|
|
|
|
|
|
|
#### Check Cluster node list
|
|
|
|
|
|
|
|
|
|
The node was added correctly.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl get nodes
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
NAME STATUS ROLES AGE VERSION
|
|
|
|
|
pi4.filter.home Ready control-plane 22h v1.27.4
|
|
|
|
|
slave01.filter.home Ready <none> 78s v1.27.4
|
|
|
|
|
slave02.filter.home Ready <none> 21h v1.27.4
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
## Part 8
|
|
|
|
|
|
|
|
|
|
I bought the domain `fihome.xyz`.
|
|
|
|
|
|
|
|
|
|
### Install helm
|
|
|
|
|
|
|
|
|
|
I followed the process documented [here](https://helm.sh/docs/intro/install/).
|
|
|
|
|
|
|
|
|
|
Also, I didn't save the output, but the process is fairly simple, soo... GL!
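For completeness, the script method from that page, which is most likely what I ran:

```shell
# Fetch and run Helm's official install script.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```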
|
|
|
|
|
|
|
|
|
|
### Update the other reverse proxies to accept ingress from the domain `fihome.xyz`
|
|
|
|
|
|
|
|
|
|
In my scenario I had to update the reverse proxy on the host `srv` to allow ingress from that domain.
|
|
|
|
|
|
|
|
|
|
As well, I had to modify the services that I intend to serve publicly.
|
|
|
|
|
|
|
|
|
|
### Install webhook
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The repository I will be using is:
|
|
|
|
|
|
|
|
|
|
https://github.com/snowdrop/godaddy-webhook
|
|
|
|
|
|
|
|
|
|
Following the installation steps listed [here](https://github.com/snowdrop/godaddy-webhook#the-godaddy-webhook).
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
helm repo add godaddy-webhook https://fred78290.github.io/cert-manager-webhook-godaddy/
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
"godaddy-webhook" has been added to your repositories
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
helm repo update
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
Hang tight while we grab the latest from your chart repositories...
|
|
|
|
|
...Successfully got an update from the "godaddy-webhook" chart repository
|
|
|
|
|
...Successfully got an update from the "nfs-subdir-external-provisioner" chart repository
|
|
|
|
|
Update Complete. ⎈Happy Helming!⎈
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
helm upgrade -i godaddy-webhook godaddy-webhook/godaddy-webhook \
|
|
|
|
|
--set groupName=acme.fihome.xyz \
|
|
|
|
|
--set image.tag=v1.27.2 \
|
|
|
|
|
--set image.pullPolicy=Always \
|
|
|
|
|
--namespace cert-manager
|
|
|
|
|
```
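A quick sanity check that the webhook pod came up; the label selector is an assumption about the chart's defaults, so a plain `kubectl get pods -n cert-manager` works too:

```shell
kubectl get pods -n cert-manager -l app.kubernetes.io/name=godaddy-webhook
```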
|
|
|
|
|
|
|
|
|
|
### Update VirtualServices and deploy an Istio Gateway config
|
|
|
|
|
|
|
|
|
|
This includes adding the `fihome.xyz` domain to the VirtualService resources, and adding a TLS entry for the respective `Istio Gateway` configuration.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl apply -f P8_Ingress -R
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
gateway.networking.istio.io/fihome-gateway created
|
|
|
|
|
virtualservice.networking.istio.io/filebrowser-vs configured
|
|
|
|
|
virtualservice.networking.istio.io/gitea-vs configured
|
|
|
|
|
virtualservice.networking.istio.io/jelly-vs configured
|
|
|
|
|
virtualservice.networking.istio.io/tube-vs configured
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
### Deploy cert-manager config for the secondary domain
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl apply -f P8_Fihome_Cert -R
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
clusterissuer.cert-manager.io/letsencrypt-fihome created
|
|
|
|
|
certificate.cert-manager.io/fihome-domain-cert-public created
|
|
|
|
|
secret/fihome-godaddy-api-key created
|
|
|
|
|
```
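For reference, the rough shape of that `ClusterIssuer`, following the webhook's README; the email is a placeholder, the secret name matches the `fihome-godaddy-api-key created` line above, the key name inside the secret is an assumption from the README, and `groupName` has to match the value passed to Helm earlier:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-fihome
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: someone@example.com
    privateKeySecretRef:
      name: letsencrypt-fihome
    solvers:
      - selector:
          dnsZones:
            - fihome.xyz
        dns01:
          webhook:
            groupName: acme.fihome.xyz
            solverName: godaddy
            config:
              apiKeySecretRef:
                # Secret holding the GoDaddy API credentials.
                name: fihome-godaddy-api-key
                key: token
              production: true
              ttl: 600
```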
|
|
|
|
|
|
|
|
|
|
### Monitor the certificate provisioning
|
|
|
|
|
|
|
|
|
|
#### Check Certificate Issuer status
|
|
|
|
|
|
|
|
|
|
Well, it was very fast in my scenario, but since we are doing a DNS challenge it could take some time, especially if we need to wait for DNS propagation, which, as a rule of thumb, can take up to 72h.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl get -n istio-system certificate fihome-domain-cert-public -o jsonpath='{.metadata.name}{"\t"}{.status.conditions[].reason}{"\t"}{.status.conditions[].message}{"\n"}'
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
fihome-domain-cert-public Ready Certificate is up to date and has not expired
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
#### Certificate logs
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl get events -n istio-system --field-selector involvedObject.name=fihome-domain-cert-public,involvedObject.kind=Certificate --sort-by=.metadata.creationTimestamp --watch
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
LAST SEEN TYPE REASON OBJECT MESSAGE
|
|
|
|
|
64s Normal Issuing certificate/fihome-domain-cert-public Issuing certificate as Secret does not exist
|
|
|
|
|
64s Normal Generated certificate/fihome-domain-cert-public Stored new private key in temporary Secret resource "fihome-domain-cert-public-wz9hv"
|
|
|
|
|
64s Normal Requested certificate/fihome-domain-cert-public Created new CertificateRequest resource "fihome-domain-cert-public-1"
|
|
|
|
|
61s Normal Issuing certificate/fihome-domain-cert-public The certificate has been successfully issued
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
## Part 9
|
|
|
|
|
|
|
|
|
|
Well, I did not document this either, yet I did deploy the following NFS provisioner from this repo:
|
|
|
|
|
|
|
|
|
|
- https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
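I didn't record the exact commands, but the chart's documented install is along these lines; the NFS server address and export path below are placeholders for the `media SRV` export:

```shell
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.3 \
    --set nfs.path=/exported/path
```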
|
|
|
|
|
|
|
|
|
|
So far it's been working correctly. I can't say I've tested it much, but the services I am running are working correctly, so... so far so good.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Part 10
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Difficulties
|
|
|
|
|
|
|
|
|
|
The cert-manager `Certificate` resources must be located in the same namespace as the `istiod` service; where the LBs live is not taken into account.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## no healthy upstream
|
|
|
|
|
|
|
|
|
|
During the "testing of configurations", I created a the `DestinationRule` for the Jelly on the namespace `default`, instead of `external`.
|
|
|
|
|
|
|
|
|
#### Custom
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
|
|
|
|
|
kubectl logs -l istio=public-ingress -n istio-system
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|