6 Commits

Author SHA1 Message Date
savagebidoof
57a8288769 Part 8 and 9 documented (even tho part 9 was just a link to the repo used ... ) 2023-10-20 20:30:53 +02:00
savagebidoof
efcb916c9f Part 8 and 9 documented (even tho part 9 was just a link to the repo used ... ) 2023-10-20 20:30:25 +02:00
savagebidoof
cd7fdbdd16 Fixing header formatting. 2023-08-02 17:27:11 +02:00
savagebidoof
40010b8c2a Added slave01 to the cluster.
Part 7 documented.
2023-08-02 17:21:16 +02:00
savagebidoof
19d8748741 The "new" cluster got all the configurations set on the "old" cluster.
Certificates already provisioned.

Part 6 complete.
2023-08-02 16:09:37 +02:00
savagebidoof
c206bb1e5b IDK when but I deleted this file unintentionally. 2023-08-02 15:54:50 +02:00
43 changed files with 1461 additions and 38 deletions

View File

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
name: external
labels:
istio-injection: "enabled"

View File

@@ -3,6 +3,8 @@ kind: DestinationRule
metadata:
name: filebrowser
namespace: external
labels:
app: filebrowser
spec:
host: filebrowser.external.svc.cluster.local
trafficPolicy:

View File

@@ -3,6 +3,8 @@ kind: ServiceEntry
metadata:
name: filebrowser-se
namespace: external
labels:
app: filebrowser
spec:
hosts:
- filebrowser.external.svc.cluster.local

View File

@@ -3,6 +3,8 @@ kind: VirtualService
metadata:
name: filebrowser-vs
namespace: external
labels:
app: filebrowser
spec:
hosts:
- "filebrowser.filter.home"

View File

@@ -3,6 +3,8 @@ kind: DestinationRule
metadata:
name: gitea
namespace: external
labels:
app: gitea
spec:
host: gitea.external.svc.cluster.local
trafficPolicy:

View File

@@ -3,6 +3,8 @@ kind: ServiceEntry
metadata:
name: gitea-se
namespace: external
labels:
app: gitea
spec:
hosts:
- gitea.external.svc.cluster.local

View File

@@ -3,6 +3,8 @@ kind: VirtualService
metadata:
name: gitea-vs
namespace: external
labels:
app: gitea
spec:
hosts:
- "gitea.filter.home"

View File

@@ -3,6 +3,8 @@ kind: VirtualService
metadata:
name: jelly-vs
namespace: external
labels:
app: jellyfin
spec:
hosts:
- "jelly.filter.home"

View File

@@ -3,6 +3,8 @@ kind: DestinationRule
metadata:
name: tube
namespace: external
labels:
app: tube
spec:
host: tube.external.svc.cluster.local
trafficPolicy:

View File

@@ -3,6 +3,8 @@ kind: ServiceEntry
metadata:
name: tube-se
namespace: external
labels:
app: tube
spec:
hosts:
- tube.external.svc.cluster.local

View File

@@ -3,6 +3,8 @@ kind: VirtualService
metadata:
name: tube-vs
namespace: external
labels:
app: tube
spec:
hosts:
- "tube.filter.home"

View File

@@ -0,0 +1,14 @@
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: istio-config
labels:
last-update: 2023-07-16
spec:
profile: minimal
meshConfig:
accessLogFile: /dev/stdout
enableTracing: true
ingressService: istio-public-ingress
ingressSelector: public-ingress

View File

@@ -0,0 +1,21 @@
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: egress
labels:
last-update: 2023-07-16
spec:
profile: empty
components:
egressGateways:
- namespace: istio-system
name: egress-gw
enabled: true
label:
istio: egress-gw
app: istio-egress-gw
k8s:
service:
type: LoadBalancer
loadBalancerIP: 192.168.1.39

View File

@@ -0,0 +1,21 @@
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: local-ingress
labels:
last-update: 2023-07-16
spec:
profile: empty
components:
ingressGateways:
- namespace: istio-system
name: istio-local-ingress
enabled: true
label:
istio: local-ingress
app: istio-local-ingress
k8s:
service:
type: LoadBalancer
loadBalancerIP: 192.168.1.21

View File

@@ -0,0 +1,21 @@
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: public-ingress
labels:
last-update: 2023-07-16
spec:
profile: empty
components:
ingressGateways:
- namespace: istio-system
name: istio-public-ingress
enabled: true
label:
istio: public-ingress
app: istio-public-ingress
k8s:
service:
type: LoadBalancer
loadBalancerIP: 192.168.1.20

View File

@@ -0,0 +1,29 @@
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cherrypick
namespace: metallb-system
spec:
addresses:
- 192.168.1.20-192.168.1.39
autoAssign: false
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: flex
namespace: metallb-system
spec:
addresses:
- 192.168.1.41-192.168.1.60
autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: l2-advert
namespace: metallb-system
spec:
ipAddressPools:
- cherrypick
- flex

View File

@@ -0,0 +1,64 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-public
namespace: istio-system
spec:
acme:
# The ACME server URL
# server: https://acme-staging-v02.api.letsencrypt.org/directory # Testing
server: https://acme-v02.api.letsencrypt.org/directory # Prod
# Email address used for ACME registration
email: filter.oriol@gmail.com
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-public
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
class: istio
podTemplate:
metadata:
annotations:
sidecar.istio.io/inject: "true"
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: filterhome-domain-cert-public
namespace: istio-system
spec:
secretName: filterhome-domain-cert-public
duration: 720h # 30d
renewBefore: 24h # 1d
# duration: 2160h # 90d
# renewBefore: 360h # 15d
isCA: false
privateKey:
algorithm: RSA
encoding: PKCS1
size: 4096
rotationPolicy: Always
usages:
- server auth
- client auth
dnsNames:
## - "*.filterhome.xyz"
# Gitea
- "gitea.filterhome.xyz"
# Jellyfin
- "jelly.filterhome.xyz"
# Filebrowser
- "filebrowser.filterhome.xyz"
# Tube
- "tube.filterhome.xyz"
issuerRef:
name: letsencrypt-public
kind: ClusterIssuer
group: cert-manager.io

View File

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
name: external
labels:
istio-injection: "enabled"

View File

@@ -0,0 +1,15 @@
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: filebrowser
namespace: external
labels:
app: filebrowser
spec:
host: filebrowser.external.svc.cluster.local
trafficPolicy:
tls:
mode: SIMPLE
connectionPool:
http:
h2UpgradePolicy: UPGRADE

View File

@@ -0,0 +1,19 @@
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: filebrowser-se
namespace: external
labels:
app: filebrowser
spec:
hosts:
- filebrowser.external.svc.cluster.local
location: MESH_INTERNAL
ports:
- number: 443
name: https
protocol: HTTPS
resolution: NONE
workloadSelector:
labels:
host: srv

View File

@@ -0,0 +1,21 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: filebrowser-vs
namespace: external
labels:
app: filebrowser
spec:
hosts:
- "filebrowser.filter.home"
- "filebrowser.filterhome.xyz"
- "filebrowser.filterhome.duckdns.org"
gateways:
- default/public-gateway
- default/local-gateway
http:
- route:
- destination:
host: filebrowser.external.svc.cluster.local
port:
number: 443

View File

@@ -0,0 +1,40 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-to-https-public
namespace: default
spec:
selector:
istio: public-ingress
servers:
- port:
number: 80
name: http2
protocol: HTTP2
hosts:
- "*"
tls:
httpsRedirect: true
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: default
spec:
selector:
istio: public-ingress
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*.filterhome.xyz"
- "filterhome.xyz"
# - "filterhome.duckdns.org"
# - "*.filterhome.duckdns.org"
tls:
mode: SIMPLE
credentialName: filterhome-domain-cert-public

View File

@@ -0,0 +1,15 @@
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: gitea
namespace: external
labels:
app: gitea
spec:
host: gitea.external.svc.cluster.local
trafficPolicy:
tls:
mode: SIMPLE
connectionPool:
http:
h2UpgradePolicy: UPGRADE

View File

@@ -0,0 +1,19 @@
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: gitea-se
namespace: external
labels:
app: gitea
spec:
hosts:
- gitea.external.svc.cluster.local
location: MESH_INTERNAL
ports:
- number: 443
name: https
protocol: HTTPS
resolution: NONE
workloadSelector:
labels:
host: srv

View File

@@ -0,0 +1,21 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: gitea-vs
namespace: external
labels:
app: gitea
spec:
hosts:
- "gitea.filter.home"
- "gitea.filterhome.xyz"
- "gitea.filterhome.duckdns.org"
gateways:
- default/public-gateway
- default/local-gateway
http:
- route:
- destination:
host: gitea.external.svc.cluster.local
port:
number: 443

View File

@@ -0,0 +1,9 @@
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadEntry
metadata:
name: srv-host
namespace: external
spec:
address: 192.168.1.3
labels:
host: srv

View File

@@ -0,0 +1,16 @@
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: jelly
namespace: external
labels:
app: jellyfin
spec:
host: jelly.external.svc.cluster.local
trafficPolicy:
tls:
mode: SIMPLE
connectionPool:
http:
h2UpgradePolicy: DO_NOT_UPGRADE
# h2UpgradePolicy: UPGRADE

View File

@@ -0,0 +1,19 @@
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: jelly-se
namespace: external
labels:
app: jellyfin
spec:
hosts:
- jelly.external.svc.cluster.local
location: MESH_INTERNAL
ports:
- number: 443
name: https
protocol: HTTPS
resolution: NONE
workloadSelector:
labels:
host: srv

View File

@@ -0,0 +1,21 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: jelly-vs
namespace: external
labels:
app: jellyfin
spec:
hosts:
- "jelly.filter.home"
- "jelly.filterhome.xyz"
- "jelly.filterhome.duckdns.org"
gateways:
- default/public-gateway
- default/local-gateway
http:
- route:
- destination:
host: jelly.external.svc.cluster.local
port:
number: 443

View File

@@ -0,0 +1,15 @@
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: tube
namespace: external
labels:
app: tube
spec:
host: tube.external.svc.cluster.local
trafficPolicy:
tls:
mode: SIMPLE
connectionPool:
http:
h2UpgradePolicy: UPGRADE

View File

@@ -0,0 +1,19 @@
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: tube-se
namespace: external
labels:
app: tube
spec:
hosts:
- tube.external.svc.cluster.local
location: MESH_INTERNAL
ports:
- number: 443
name: https
protocol: HTTPS
resolution: NONE
workloadSelector:
labels:
host: srv

View File

@@ -0,0 +1,21 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: tube-vs
namespace: external
labels:
app: tube
spec:
hosts:
- "tube.filter.home"
- "tube.filterhome.xyz"
- "tube.filterhome.duckdns.org"
gateways:
- default/public-gateway
- default/local-gateway
http:
- route:
- destination:
host: tube.external.svc.cluster.local
port:
number: 443

View File

@@ -0,0 +1,42 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: ca-issuer
namespace: cert-manager
spec:
ca:
secretName: local-ca
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: local-wildcard-certificate
namespace: istio-system
spec:
secretName: domain-cert-local
privateKey:
rotationPolicy: Always
algorithm: RSA
encoding: PKCS1
size: 4096
duration: 720h # 30d
renewBefore: 24h # 1d
subject:
organizations:
- FilterHome
commonName: filterhome
isCA: false
usages:
- server auth
- client auth
dnsNames:
# - demoapi.default
# - demoapi.default.svc
# - demoapi.default.svc.cluster
# - demoapi.default.svc.cluster.local
- "filter.home"
- "*.filter.home"
# - jelly.filter.home
issuerRef:
name: ca-issuer
kind: ClusterIssuer

View File

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: local-ca
namespace: cert-manager
data:
tls.crt:
tls.key:

View File

@@ -0,0 +1,37 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-to-https-local
namespace: default
spec:
selector:
istio: local-ingress
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: local-gateway
namespace: default
spec:
selector:
istio: local-ingress
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "filter.home"
- "*.filter.home"
tls:
mode: SIMPLE
credentialName: domain-cert-local

View File

@@ -0,0 +1,45 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-fihome
namespace: istio-system
spec:
acme:
# ACME Server
# prod : https://acme-v02.api.letsencrypt.org/directory
# staging : https://acme-staging-v02.api.letsencrypt.org/directory
server: https://acme-v02.api.letsencrypt.org/directory
# ACME Email address
email: <redacted>
privateKeySecretRef:
name: letsencrypt-fihome # staging or production
solvers:
- selector:
dnsNames:
- '*.fihome.xyz'
dns01:
webhook:
config:
apiKeySecretRef:
name: fihome-godaddy-api-key
key: key
secret: secret
production: true
ttl: 600
groupName: acme.fihome.xyz
solverName: godaddy
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: fihome-domain-cert-public
namespace: istio-system
spec:
secretName: fihome-domain-cert-public
duration: 720h # 30d
renewBefore: 168h # 7d
dnsNames:
- '*.fihome.xyz'
issuerRef:
name: letsencrypt-fihome
kind: ClusterIssuer

View File

@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
metadata:
name: fihome-godaddy-api-key
namespace: cert-manager
type: Opaque
data:
key: <redacted>
secret: <redacted>

View File

@@ -0,0 +1,23 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: filebrowser-vs
namespace: external
labels:
app: filebrowser
spec:
hosts:
- "filebrowser.filter.home"
- "filebrowser.filterhome.xyz"
- "filebrowser.fihome.xyz"
- "filebrowser.filterhome.duckdns.org"
gateways:
- default/public-gateway
- default/local-gateway
- default/fihome-gateway
http:
- route:
- destination:
host: filebrowser.external.svc.cluster.local
port:
number: 443

View File

@@ -0,0 +1,20 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: fihome-gateway
namespace: default
spec:
selector:
istio: public-ingress
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*.fihome.xyz"
- "fihome.xyz"
tls:
mode: SIMPLE
credentialName: fihome-cert

View File

@@ -0,0 +1,23 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: gitea-vs
namespace: external
labels:
app: gitea
spec:
hosts:
- "gitea.filter.home"
- "gitea.filterhome.xyz"
- "gitea.fihome.xyz"
- "gitea.filterhome.duckdns.org"
gateways:
- default/public-gateway
- default/local-gateway
- default/fihome-gateway
http:
- route:
- destination:
host: gitea.external.svc.cluster.local
port:
number: 443

View File

@@ -0,0 +1,23 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: jelly-vs
namespace: external
labels:
app: jellyfin
spec:
hosts:
- "jelly.filter.home"
- "jelly.filterhome.xyz"
- "jelly.filterhome.duckdns.org"
- "jelly.fihome.xyz"
gateways:
- default/public-gateway
- default/local-gateway
- default/fihome-gateway
http:
- route:
- destination:
host: jelly.external.svc.cluster.local
port:
number: 443

View File

@@ -0,0 +1,23 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: tube-vs
namespace: external
labels:
app: tube
spec:
hosts:
- "tube.filter.home"
- "tube.filterhome.xyz"
- "tube.filterhome.duckdns.org"
- "tube.fihome.xyz"
gateways:
- default/public-gateway
- default/local-gateway
- default/fihome-gateway
http:
- route:
- destination:
host: tube.external.svc.cluster.local
port:
number: 443

View File

@@ -141,7 +141,7 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r
- [x] Deploy an Ingress LB for local thingies.
> **Note:**
> - https://istio.io/latest/docs/tasks/traffic-management/egress/
> - https://istio.io/latest/docs/tasks/traffic-management/egress/egress-kubernetes-services/
> - https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/
@@ -167,7 +167,7 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r
- [x] Deploy configurations to route some services through the Istio `istio-local-ingress` Load Balancer deployed.
> **Note**:\
> Regarding Let's Encrypt certificate provisioning, for testing purposes the `staging` environment should be used; nevertheless, in my scenario I am running directly against the production environment. Why?
> - `Staging` and `Production` behave differently, therefore one can get the certificates verified on `Staging` and not on `Production`.
> - I ran into some issues regarding the point above, so there was some back and forth; this topic is mentioned [here at the end.](#2x1-able-to-get-staging-le-certs-but-not-the-production-one-when-using-custom-istio-selector-and-only-able-to-get-production-le-certs-when-using-the-default-istio-ingressgateway-selector)
> - Since there was "back and forth" I sort of cheated and set this as it is.
@@ -182,7 +182,7 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r
### Part 4
> Completed 27/July/2023
- [x] Deploy locally a Certificate Authorization Service (on the SRV host.)
@@ -208,6 +208,8 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r
### Part 5
> Completed 01/August/2023
- [x] Explore Pi4 Storage options.
- [x] Consider Storage options for the OrangePi5.
@@ -218,8 +220,10 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r
### Part 6
> Completed 02/August/2023
- [x] ~~Wipe~~ (**don't wipe** just use a different drive) and recreate the current `Kluster`, this time using the Pi4 as a _master_, and the 2 Orange Pi5 as _slaves_ (this will require updating the DNS/DHCP local services).
- [x] Deploy all the services from the previous Kubernetes cluster to the new one.
> **Note**:\
> I can make a new cluster on the Pi4 and remove the taint that prevents scheduling pods on that node. Deploy everything inside (as well as a LB with the same exact IP as the current one, then proceed to stop the Orange Pi 5s), then "reformat" the OPi5s with a new distro, install stuff etc., and join them to the cluster running on the Pi4.
@@ -230,18 +234,27 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r
### Part 7
> Completed 02/August/2023
- [x] Remove the last host from the old kubernetes cluster and join it to the new cluster.
### Part 8
- [x] Set wildcards certificates through `ACME DNS01` challenge.
### Part 9
- [x] Deploy NFS service(s) on the `media SRV` host.
### Part 10
- [ ] Deploy Istio security.
> **Note:**\
> Since there are barely any workloads, I don't think it's wise to deploy restrictive security settings for the moment.
### Part 11
- [ ] Update the `Current Setup` documentation with the new container and architecture rearrangement.
- [ ] Migrate some lightweight/not data heavy services from the `media SRV` to the `Kluster`.
@@ -258,17 +271,14 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r
- Run the old migrated services back on the `media SRV` host.
### TMP Notes
### Extras?
#### Horizontal Pod Autoscaling for the Istio LBs.
- https://github.com/joohoi/acme-dns
# Execution
## Part 1
### Transfer local network dependencies services from Pi4 to SRV.
#### Install ZIP on `Pi4`
@@ -1160,7 +1170,7 @@ I updated the Local DNS to point towards the new-architecture/new-resources.
First, let's generate a certificate and its key.
```shell
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
-keyout ca.filter.home.key -out ca.filter.home.cer \
-subj /C=ES/ST=BAR/O=FilterHome/CN=ca.filter.home \
-extensions ext \
@@ -1291,7 +1301,7 @@ x-envoy-upstream-service-time: 2
> It's extremely possible that I'll set the HTTP to HTTPS redirect locally as well; I still need to decide if there is any reason to maintain local HTTP traffic.
## Part 5
I decided to use an SSD with the Pi4 through a USB 3.0 connector.
@@ -1299,7 +1309,7 @@ On the Orange Pi5 I intend to use an m.2 that I bought for it, yet need to try i
Therefore, I will remove 1 node from the current Kubernetes cluster and try the m.2 that I got on it.
### Remove 1 OrangePI Node
https://stackoverflow.com/questions/35757620/how-to-gracefully-remove-a-node-from-kubernetes
@@ -1353,7 +1363,7 @@ error when evicting pods/"istio-public-ingress-5bb994c8b7-j9hgr" -n "istio-syste
Well, it's on it, so let's open a new shell.
### Cannot evict pod as it would violate the pod's disruption budget.
Let's delete the "remaining" pods manually.
@@ -1430,7 +1440,7 @@ kubectl delete node slave01.filter.home
node "slave01.filter.home" deleted
```
## Part 6
Currently, the resources were as follows:
@@ -1449,17 +1459,17 @@ I will change it to
```
### Prepare NVME images etc.
Did the needful.
### Set up cluster
Used the following ansible script to set up things around.
https://gitea.filterhome.xyz/ofilter/ansible_kubernetes_cluster
#### run.sh
<pre><span style="color:#FF7F7F"><b>✗</b></span> ./run.sh
@@ -1755,19 +1765,8 @@ PLAY RECAP *********************************************************************
<span style="color:#CC3980">slave02.filter.home</span> : <span style="color:#7F3FBF">ok=12 </span> <span style="color:#CC3980">changed=23 </span> unreachable=0 failed=0 <span style="color:#7f7fff">skipped=12 </span> rescued=0 ignored=0
</pre>
### Check cluster status
Pods are deployed correctly
```shell
kubectl get pods --kubeconfig ~/kubeconfig.conf -A -owide
@@ -1791,19 +1790,718 @@ metallb-system speaker-5zptn 1/1 Running 2
metallb-system speaker-whw4n 1/1 Running 2 (22m ago) 26m 192.168.1.11 slave02.filter.home <none> <none>
```
### Kubeconfig
#### Backup `Kubeconfig` file
I will back up the `kubeconfig.conf` file to a directory of my own.
```shell
cp ksetup/Exported/kubeconfig.conf ~/kubeconfig.conf -v
```
```text
'ksetup/Exported/kubeconfig.conf' -> '/home/savagebidoof/kubeconfig.conf'
```
#### Configure new `Kubeconfig` location
```shell
export KUBECONFIG="/home/savagebidoof/kubeconfig.conf"
```
#### Confirm `Kubeconfig` is selected properly
```shell
kubectl get nodes
```
```text
NAME STATUS ROLES AGE VERSION
pi4.filter.home Ready control-plane 18h v1.27.4
slave02.filter.home Ready <none> 17h v1.27.4
```
### Move workloads
Well, it's time to move everything over.
During the couple of days I was waiting for the NVME I ordered, I considered changing the IPs of the deployed LBs.
Instead of using:
```yaml
.80 -> public LB
.81 -> local LB
.90 -> egress LB
```
I will be using
```yaml
.20 -> public LB
.21 -> local LB
.39 -> egress LB
```
#### Deploy lacking CRDs
I already have `MetalLB` and `Calico` installed.
I lack Cert-manager CRDs.
```shell
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
```
```text
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
...
```
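Before moving on, a quick check that the cert-manager pods come up and the CRDs registered doesn't hurt:

```shell
# Verify cert-manager workloads and CRDs are in place
kubectl get pods -n cert-manager
kubectl get crds | grep cert-manager.io
```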
#### MetalLB Config
I am setting up 2 Address Pools.
`cherrypick` for the things that I want to give IPs with my finger.
```yaml
kind: IPAddressPool
...
name: cherrypick
- 192.168.1.20-192.168.1.39
...
autoAssign: false
```
And `flex` to assign IPs dynamically.
```yaml
kind: IPAddressPool
...
name: flex
- 192.168.1.41-192.168.1.60
...
autoAssign: true
```
##### Deploy MetalLB configuration
```shell
kubectl create -f P6_Redeployment/MetalLB.yaml
```
```text
ipaddresspool.metallb.io/cherrypick created
ipaddresspool.metallb.io/flex created
l2advertisement.metallb.io/l2-advert created
```
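A quick sanity check that both pools and the L2 advertisement registered:

```shell
# List the MetalLB pools and advertisement created above
kubectl get ipaddresspools.metallb.io -n metallb-system
kubectl get l2advertisements.metallb.io -n metallb-system
```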
> **TODO:** Should add labels to the SE resources etc.
#### Deploy Istio Config and Load Balancers
##### IstioOperator_IstioConfig.yaml
```shell
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_IstioConfig.yaml
```
<pre>This will install the Istio 1.18.2 minimal profile with [&quot;Istio core&quot; &quot;Istiod&quot;] components into the cluster. Proceed? (y/N) y
<span style="color:#7F3FBF">✔</span> Istio core installed
<span style="color:#7F3FBF">✔</span> Istiod installed
<span style="color:#7F3FBF">✔</span> Installation complete
Making this installation the default for injection and validation.</pre>
##### IstioOperator_IstioEgress.yaml
```shell
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_IstioEgress.yaml
```
<pre><span style="color:#7F3FBF">✔</span> Egress gateways installed
<span style="color:#7F3FBF">✔</span> Installation complete </pre>
> **Note**:\
> The egress resource doesn't work "right off the bat"; it requires some extra configuration, and I don't think this will be done in this "walkthrough".
##### IstioOperator_LocalIngress.yaml
```shell
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_LocalIngress.yaml
```
<pre><span style="color:#7F3FBF">✔</span> Ingress gateways installed
<span style="color:#7F3FBF">✔</span> Installation complete</pre>
##### IstioOperator_PublicIngress.yaml
```shell
istioctl install -y -f P6_Redeployment/Istio_Config/IstioOperator_PublicIngress.yaml
```
<pre><span style="color:#7F3FBF">✔</span> Ingress gateways installed
<span style="color:#7F3FBF">✔</span> Installation complete </pre>
##### Check Service IP provisioning
```shell
kubectl get svc -n istio-system | grep LoadBalancer
```
<pre>egress-gw <span style="color:#FF7F7F"><b>LoadBalancer</b></span> 10.106.41.20 192.168.1.39 80:31322/TCP,443:30559/TCP 138m
istio-local-ingress <span style="color:#FF7F7F"><b>LoadBalancer</b></span> 10.97.14.59 192.168.1.21 15021:30005/TCP,80:30168/TCP,443:32103/TCP 50m
istio-public-ingress <span style="color:#FF7F7F"><b>LoadBalancer</b></span> 10.100.53.247 192.168.1.20 15021:31249/TCP,80:30427/TCP,443:30411/TCP 50m</pre>
### Prepare Secrets
#### Placeholder folder for God knows what
```shell
mkdir tmp
```
#### Local Certs
```shell
openssl req -x509 -newkey rsa:4096 -sha512 -days 365 -nodes \
-keyout tmp/ca.filter.home.key -out tmp/ca.filter.home.cer \
-subj /C=ES/ST=BAR/O=FilterHome/CN=ca.filter.home \
-extensions ext \
-config <(cat <<EOF
[req]
distinguished_name=req
[ext]
keyUsage=critical,keyCertSign,cRLSign
basicConstraints=critical,CA:true,pathlen:1
subjectAltName=DNS:ca.filter.home
EOF
)
```
```shell
cmdsubst heredoc> [req]
cmdsubst heredoc> distinguished_name=req
cmdsubst heredoc> [ext]
cmdsubst heredoc> keyUsage=critical,keyCertSign,cRLSign
cmdsubst heredoc> basicConstraints=critical,CA:true,pathlen:1
cmdsubst heredoc> subjectAltName=DNS:ca.filter.home
cmdsubst heredoc> EOF
cmdsubst> )
```
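Optionally, sanity-check the generated CA before wiring it into the Secret:

```shell
# Confirm the subject and validity window of the freshly minted CA
openssl x509 -in tmp/ca.filter.home.cer -noout -subject -dates
```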
Export the `.key` and `.cer` as base64
```shell
cat tmp/ca.filter.home.cer | base64 | tr -d '\n'
```
```shell
cat tmp/ca.filter.home.key | base64 | tr -d '\n'
```
Add the base64 outputs to the secrets file `P6_Redeployment/non_Istio_Config/Local_Certs/Secret.yaml`
```shell
nano P6_Redeployment/non_Istio_Config/Local_Certs/Secret.yaml
```
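As an alternative to pasting the base64 by hand, the same Secret could be rendered with `kubectl` (a sketch, using the `tmp/` files from above and the `local-ca`/`cert-manager` names from the manifest):

```shell
# Render a TLS Secret manifest from the CA files (client-side dry run, prints YAML)
kubectl create secret tls local-ca -n cert-manager \
  --cert=tmp/ca.filter.home.cer --key=tmp/ca.filter.home.key \
  --dry-run=client -o yaml
```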
#### Modify Public Certificate duration
Changed the duration to something more "reasonable".
```shell
nano P6_Redeployment/non_Istio_Config/Certificate_Manager/Issuer.yaml
```
#### Modify Local Certificate duration
Changed the duration to something more "reasonable".
```shell
nano P6_Redeployment/non_Istio_Config/Local_CA/Issuer.yaml
```
#### Set up HTTP to HTTPS in the local gateway
Added the following Gateway and modified the previously existing one to remove `HTTP` access through port 80.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-to-https-local
namespace: default
spec:
selector:
istio: local-ingress
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
```
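Once deployed, the redirect can be verified with a plain HTTP request (assuming `jelly.filter.home` resolves to the local ingress):

```shell
# Expect a 301 with an https:// Location header
curl -sI http://jelly.filter.home | head -n 3
```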
#### Deploy EVERYTHING not IstioOperator
Well, it's time to press the create button. This will pop up some text; just make sure everything states "created".
```shell
kubectl create -f P6_Redeployment/non_Istio_Config -R
```
```text
clusterissuer.cert-manager.io/letsencrypt-public created
certificate.cert-manager.io/filterhome-domain-cert-public created
namespace/external created
destinationrule.networking.istio.io/filebrowser created
serviceentry.networking.istio.io/filebrowser-se created
virtualservice.networking.istio.io/filebrowser-vs created
gateway.networking.istio.io/http-to-https-public created
gateway.networking.istio.io/public-gateway created
destinationrule.networking.istio.io/gitea created
serviceentry.networking.istio.io/gitea-se created
virtualservice.networking.istio.io/gitea-vs created
workloadentry.networking.istio.io/srv-host created
destinationrule.networking.istio.io/jelly created
serviceentry.networking.istio.io/jelly-se created
virtualservice.networking.istio.io/jelly-vs created
destinationrule.networking.istio.io/tube created
serviceentry.networking.istio.io/tube-se created
virtualservice.networking.istio.io/tube-vs created
clusterissuer.cert-manager.io/ca-issuer created
certificate.cert-manager.io/local-wildcard-certificate created
secret/local-ca created
gateway.networking.istio.io/http-to-https-local created
gateway.networking.istio.io/local-gateway created
```
#### Remove ./tmp folder
We no longer need the `./tmp` folder, so we can delete its contents.
```shell
rm ./tmp/* -v
```
```text
zsh: sure you want to delete all 4 files in /home/savagebidoof/IdeaProjects/home_shit/Migrations/Forget_Traefik_2023/./tmp [yn]? y
removed './tmp/ca.filter.home.cer'
removed './tmp/ca.filter.home.key'
```
#### Update the Router to point towards the "new" Ingress Load Balancer
Changed from `192.168.1.80` to `192.168.1.20`.
#### Update Local DNS
I did the needful.
#### Monitor Public Cert Provisioning
```shell
kubectl get events -n istio-system --field-selector involvedObject.name=filterhome-domain-cert-public,involvedObject.kind=Certificate --sort-by=.metadata.creationTimestamp --watch
```
```text
LAST SEEN TYPE REASON OBJECT MESSAGE
13m Normal Issuing certificate/filterhome-domain-cert-public Issuing certificate as Secret does not exist
12m Normal Generated certificate/filterhome-domain-cert-public Stored new private key in temporary Secret resource "filterhome-domain-cert-public-2vdxk"
12m Normal Requested certificate/filterhome-domain-cert-public Created new CertificateRequest resource "filterhome-domain-cert-public-js69j"
8m46s Normal Issuing certificate/filterhome-domain-cert-public Issuing certificate as Secret does not exist
8m35s Normal Generated certificate/filterhome-domain-cert-public Stored new private key in temporary Secret resource "filterhome-domain-cert-public-n8w8s"
8m35s Normal Requested certificate/filterhome-domain-cert-public Created new CertificateRequest resource "filterhome-domain-cert-public-cb8ws"
103s Normal Issuing certificate/filterhome-domain-cert-public The certificate has been successfully issued
```
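With the certificate issued, the public ingress can be checked end to end (a sketch; any of the public hostnames works):

```shell
# Inspect the certificate actually served by the public ingress LB
openssl s_client -connect 192.168.1.20:443 -servername gitea.filterhome.xyz </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```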
## Part 7
I did set up the NVME with the OS etc.
### Set up cluster
Used the following ansible script to join the host to the Kubernetes cluster; all I had to do was populate `Exported/kubeadm-join.command`.
https://gitea.filterhome.xyz/ofilter/ansible_kubernetes_cluster
### Difficulties
<pre><span style="color:#FF7F7F"><b>✗</b></span> ./run.sh
PLAY [Pre Setup] *************************************************************************************************************************************************************************************************************************
TASK [apt update] ************************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [apt upgrade] ***********************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Install sudo] **********************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [update facts] **********************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [Make sure we have a &apos;wheel&apos; group] *************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Allow &apos;wheel&apos; group to have passwordless sudo] *************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Add user new_user_name] ************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">[DEPRECATION WARNING]: Encryption using the Python crypt module is deprecated. The Python crypt module is deprecated and will be removed from Python 3.13. Install the passlib library for continued encryption functionality. This </span>
<span style="color:#7F3FBF">feature will be removed in version 2.17. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.</span>
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [adding user &apos;klussy&apos; to group wheel] ***********************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Select new user] *******************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [Change root default password] ******************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">[DEPRECATION WARNING]: Encryption using the Python crypt module is deprecated. The Python crypt module is deprecated and will be removed from Python 3.13. Install the passlib library for continued encryption functionality. This </span>
<span style="color:#7F3FBF">feature will be removed in version 2.17. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.</span>
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [PermitRootLogin = no] **************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [PermitEmptyPasswords = no] *********************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [Ensure localisation files for &apos;en_US.UTF-8&apos; are available] *************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [Ensure localisation files for &apos;en_US.UTF-8&apos; are available] *************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [Get current locale and language configuration] *************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [Configure locale to &apos;en_US.UTF-8&apos; and language to &apos;en_US.UTF-8&apos;] *******************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [reboot] ****************************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">slave01.filter.home</span> : <span style="color:#7F3FBF">ok=10 </span> <span style="color:#CC3980">changed=11 </span> unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 </pre>
<pre>PLAY [Preparethings] *********************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [debug] *****************************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home] =&gt; {</span>
<span style="color:#7F3FBF"> &quot;set_hostname&quot;: &quot;slave01.filter.home&quot;</span>
<span style="color:#7F3FBF">}</span>
TASK [debug] *****************************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home] =&gt; {</span>
<span style="color:#7F3FBF"> &quot;is_master&quot;: false</span>
<span style="color:#7F3FBF">}</span>
TASK [Set a hostname] ********************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Swapoff] ***************************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Disable ram on boot (orangepi)] ****************************************************************************************************************************************************************************************************
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
TASK [Disable ram on boot (armbian)] *****************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [apt prune containerd] **************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [apt update] ************************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [apt upgrade] ***********************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [apt install gnupg] *****************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [Creating a new directory] **********************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [Add Docker GPG key] ****************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Add Docker APT repository] *********************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Add Kubernetes GPG key] ************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Add Kubernetes APT repository] *****************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [apt update] ************************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Install Kubelet Kubeadm Kubectl] ***************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Hold kubeadm] **********************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Hold kubelet] **********************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Hold kubectl] **********************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Install Container Runtime] *********************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Containerd set default config] *****************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [SystemdCgroup = true] **************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Iptables thingies (not touching specific firewall rules.)] *************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Iptables thingies] *****************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Add the overlay module] ************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [Add the br_netfilter module] *******************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Apply changes (might need to use sysctl module with the reload flag, will try eventually)] *****************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Enable kubelet] ********************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [Enable containerd] *****************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [Enable kubelet] ********************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Restart containerd] ****************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Set /etc/hosts file content (template/base)] ***************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Init cluster] **********************************************************************************************************************************************************************************************************************
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
TASK [Export remote kubeconfig file] *****************************************************************************************************************************************************************************************************
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
TASK [Remove Taint (allows deployment in control plane node)] ****************************************************************************************************************************************************************************
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
TASK [Calico] ****************************************************************************************************************************************************************************************************************************
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
TASK [MetalLB] ***************************************************************************************************************************************************************************************************************************
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
TASK [Generate join token] ***************************************************************************************************************************************************************************************************************
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
TASK [set_fact] **************************************************************************************************************************************************************************************************************************
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
TASK [debug] *****************************************************************************************************************************************************************************************************************************
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
TASK [Store join command in &quot;./Exported/kubeadm-join.command&quot;] ***************************************************************************************************************************************************************************
<span style="color:#7F7FFF">skipping: [slave01.filter.home]</span>
TASK [Populate] **************************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home]</span>
TASK [debug] *****************************************************************************************************************************************************************************************************************************
<span style="color:#7F3FBF">ok: [slave01.filter.home] =&gt; {</span>
<span style="color:#7F3FBF"> &quot;_kubeadm_join_command&quot;: &quot;kubeadm join 192.168.1.9:6443 --token fjw9iy.0v1vim764ls5mpyp --discovery-token-ca-cert-hash sha256:028116b6076078c15bd4212b8a96ecc159507a07df11d7f9859b2593533616c3&quot;</span>
<span style="color:#7F3FBF">}</span>
TASK [Join kubeadm] **********************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [Delete user] ***********************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
TASK [reboot] ****************************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">changed: [slave01.filter.home]</span>
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
<span style="color:#CC3980">slave01.filter.home</span> : <span style="color:#7F3FBF">ok=38 </span> <span style="color:#CC3980">changed=25 </span> unreachable=0 failed=0 <span style="color:#7F7FFF">skipped=10 </span> rescued=0 ignored=0</pre>
#### Check Cluster node list
The node was added correctly.
```shell
kubectl get nodes
```
```text
NAME STATUS ROLES AGE VERSION
pi4.filter.home Ready control-plane 22h v1.27.4
slave01.filter.home Ready <none> 78s v1.27.4
slave02.filter.home Ready <none> 21h v1.27.4
```
## Part 8
I bought the domain `fihome.xyz`.
### Install helm
I followed the process documented [here](https://helm.sh/docs/intro/install/).
Also, I didn't save the output, but the process is fairly simple, so... GL!
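For reference, the script-based install from the Helm docs looks roughly like this:

```shell
# Fetch and run the official Helm 3 install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```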
### Update the other reverse proxies to accept ingress from the domain `fihome.xyz`
In my scenario I had to update the reverse proxy on the `srv` host to allow ingress from that domain.
I also had to modify the services that I intend to serve publicly.
### Install webhook
The repository I will be using is:
https://github.com/snowdrop/godaddy-webhook
Following the installation steps listed [here](https://github.com/snowdrop/godaddy-webhook#the-godaddy-webhook).
```shell
helm repo add godaddy-webhook https://fred78290.github.io/cert-manager-webhook-godaddy/
```
```text
"godaddy-webhook" has been added to your repositories
```
```shell
helm repo update
```
```text
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "godaddy-webhook" chart repository
...Successfully got an update from the "nfs-subdir-external-provisioner" chart repository
Update Complete. ⎈Happy Helming!⎈
```
```shell
helm upgrade -i godaddy-webhook godaddy-webhook/godaddy-webhook \
--set groupName=acme.fihome.xyz \
--set image.tag=v1.27.2 \
--set image.pullPolicy=Always \
--namespace cert-manager
```
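Didn't capture the output here either; `helm list` should show the release sitting in the `cert-manager` namespace:

```shell
# Confirm the webhook release deployed
helm list -n cert-manager
```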
### Update VirtualServices and deploy an Istio Gateway config
This includes adding the `fihome.xyz` domain to the VirtualService resources, and adding a TLS entry for the respective `Istio Gateway` configuration.
```shell
kubectl apply -f P8_Ingress -R
```
```text
gateway.networking.istio.io/fihome-gateway created
virtualservice.networking.istio.io/filebrowser-vs configured
virtualservice.networking.istio.io/gitea-vs configured
virtualservice.networking.istio.io/jelly-vs configured
virtualservice.networking.istio.io/tube-vs configured
```
### Deploy cert-manager config for the secondary domain
```shell
kubectl apply -f P8_Fihome_Cert -R
```
```text
clusterissuer.cert-manager.io/letsencrypt-fihome created
certificate.cert-manager.io/fihome-domain-cert-public created
secret/fihome-godaddy-api-key created
```
### Monitor the certificate provisioning
#### Check Certificate Issuer status
Well, it was very fast in my scenario, but since we are doing a DNS challenge, it could take some time, especially if we need to wait for DNS propagation, which, as a rule of thumb, can take up to 72h.
```shell
kubectl get -n istio-system certificate fihome-domain-cert-public -o jsonpath='{.metadata.name}{"\t"}{.status.conditions[].reason}{"\t"}{.status.conditions[].message}{"\n"}'
```
```text
fihome-domain-cert-public Ready Certificate is up to date and has not expired
```
#### Certificate logs
```shell
kubectl get events -n istio-system --field-selector involvedObject.name=fihome-domain-cert-public,involvedObject.kind=Certificate --sort-by=.metadata.creationTimestamp --watch
```
```text
LAST SEEN TYPE REASON OBJECT MESSAGE
64s Normal Issuing certificate/fihome-domain-cert-public Issuing certificate as Secret does not exist
64s Normal Generated certificate/fihome-domain-cert-public Stored new private key in temporary Secret resource "fihome-domain-cert-public-wz9hv"
64s Normal Requested certificate/fihome-domain-cert-public Created new CertificateRequest resource "fihome-domain-cert-public-1"
61s Normal Issuing certificate/fihome-domain-cert-public The certificate has been successfully issued
```
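Since this is a `DNS01` challenge, the pending challenges and the TXT record cert-manager publishes can also be watched directly (a sketch):

```shell
# Pending ACME challenges in the certificate's namespace
kubectl get challenges.acme.cert-manager.io -n istio-system
# TXT record the DNS01 solver publishes (record name per the ACME spec)
dig +short TXT _acme-challenge.fihome.xyz
```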
## Part 9
Well, I did not document this either, yet I did deploy the following NFS provisioner from this repo:
- https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
So far it's been working correctly; I can't say I have tested it much, but the services I am running are working correctly, so, so far so good.
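For the record, the install roughly follows the chart's README; a sketch, where the NFS server is the `media SRV` from earlier and the export path is a placeholder:

```shell
# Add the chart repo and install the provisioner (nfs.path is hypothetical)
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.3 \
  --set nfs.path=/srv/nfs/kluster
```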
## Part 10
# Difficulties
The Certificate Manager must be located in the same namespace as the `istiod` service; the location of the LB is not taken into account.
## no healthy upstream
During the "testing of configurations", I created a the `DestinationRule` for the Jelly on the namespace `default`, instead of `external`.
@@ -1936,7 +2634,7 @@ Let's check the "access logs" differences when using one selector or another.
#### Custom
```shell
kubectl logs -l istio=public-ingress -n istio-system
```
```text