Parts 8 and 9 documented (even though Part 9 was just a link to the repo used ...)

This commit is contained in:
savagebidoof 2023-10-20 20:30:25 +02:00
parent cd7fdbdd16
commit efcb916c9f
7 changed files with 243 additions and 9 deletions

View File

@@ -5,7 +5,7 @@ metadata:
  namespace: default
spec:
  selector:
-    istio: public-ingress
+    istio: local-ingress
  servers:
  - port:
      number: 80

View File

@@ -0,0 +1,23 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: filebrowser-vs
  namespace: external
  labels:
    app: filebrowser
spec:
  hosts:
  - "filebrowser.filter.home"
  - "filebrowser.filterhome.xyz"
  - "filebrowser.fihome.xyz"
  - "filebrowser.filterhome.duckdns.org"
  gateways:
  - default/public-gateway
  - default/local-gateway
  - default/fihome-gateway
  http:
  - route:
    - destination:
        host: filebrowser.external.svc.cluster.local
        port:
          number: 443

View File

@@ -0,0 +1,20 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: fihome-gateway
  namespace: default
spec:
  selector:
    istio: public-ingress
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*.fihome.xyz"
    - "fihome.xyz"
    tls:
      mode: SIMPLE
      credentialName: fihome-cert

View File

@@ -0,0 +1,23 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gitea-vs
  namespace: external
  labels:
    app: gitea
spec:
  hosts:
  - "gitea.filter.home"
  - "gitea.filterhome.xyz"
  - "gitea.fihome.xyz"
  - "gitea.filterhome.duckdns.org"
  gateways:
  - default/public-gateway
  - default/local-gateway
  - default/fihome-gateway
  http:
  - route:
    - destination:
        host: gitea.external.svc.cluster.local
        port:
          number: 443

View File

@@ -0,0 +1,23 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: jelly-vs
  namespace: external
  labels:
    app: jellyfin
spec:
  hosts:
  - "jelly.filter.home"
  - "jelly.filterhome.xyz"
  - "jelly.filterhome.duckdns.org"
  - "jelly.fihome.xyz"
  gateways:
  - default/public-gateway
  - default/local-gateway
  - default/fihome-gateway
  http:
  - route:
    - destination:
        host: jelly.external.svc.cluster.local
        port:
          number: 443

View File

@@ -0,0 +1,23 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tube-vs
  namespace: external
  labels:
    app: tube
spec:
  hosts:
  - "tube.filter.home"
  - "tube.filterhome.xyz"
  - "tube.filterhome.duckdns.org"
  - "tube.fihome.xyz"
  gateways:
  - default/public-gateway
  - default/local-gateway
  - default/fihome-gateway
  http:
  - route:
    - destination:
        host: tube.external.svc.cluster.local
        port:
          number: 443

View File

@@ -240,16 +240,20 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r
### Part 8
- - [ ] Deploy NFS service on the `media SRV` host.
+ - [x] Set wildcards certificates through `ACME DNS01` challenge.
### Part 9
- [x] Deploy NFS service(s) on the `media SRV` host.
### Part 10
- [ ] Deploy Istio security.
> **Note:**\
> Since there are barely any workloads yet, I don't think it's wise to deploy restrictive security settings at the moment.
- ### Part 10
+ ### Part 11
- [ ] Update the `Current Setup` documentation with the new container and architecture rearrangement.
@@ -267,9 +271,9 @@ Current Issue? For X and y, I need to wait for a while for the DNS provider to r
- Run the old migrated services back on the `media SRV` host.
### Part 11
### TMP Notes
- Set wildcards certificates through `ACME DNS01` challenge.
- https://github.com/joohoi/acme-dns
# Execution
@@ -1166,7 +1170,7 @@ I updated the Local DNS to point towards the new-architecture/new-resources.
First, let's generate a certificate and its key.
```shell
-openssl req -x509 -newkey rsa:4096 -sha256 -days 5 -nodes \
+openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout ca.filter.home.key -out ca.filter.home.cer \
  -subj /C=ES/ST=BAR/O=FilterHome/CN=ca.filter.home \
  -extensions ext \
@@ -2372,14 +2376,132 @@ slave01.filter.home Ready <none> 78s v1.27.4
slave02.filter.home Ready <none> 21h v1.27.4
```
## Part 8
I bought the domain `fihome.xyz`.
### Install helm
I followed the process documented [here](https://helm.sh/docs/intro/install/).
I didn't save the output, but the process is fairly simple, so ... GL!
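For reference, the script-based install from that page looks roughly like this (one of several methods the Helm docs list):
```shell
# Fetch and run the official Helm 3 install script.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```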
### Update the other reverse proxies to accept ingress from the domain `fihome.xyz`
In my scenario, I had to update the reverse proxy on the `srv` host to allow traffic from that domain.
I also had to modify the services that I intend to serve publicly.
### Install webhook
The repository I will be using is:
https://github.com/snowdrop/godaddy-webhook
I am following the installation steps listed [here](https://github.com/snowdrop/godaddy-webhook#the-godaddy-webhook).
```shell
helm repo add godaddy-webhook https://fred78290.github.io/cert-manager-webhook-godaddy/
```
```text
"godaddy-webhook" has been added to your repositories
```
```shell
helm repo update
```
```text
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "godaddy-webhook" chart repository
...Successfully got an update from the "nfs-subdir-external-provisioner" chart repository
Update Complete. ⎈Happy Helming!⎈
```
```shell
helm upgrade -i godaddy-webhook godaddy-webhook/godaddy-webhook \
--set groupName=acme.fihome.xyz \
--set image.tag=v1.27.2 \
--set image.pullPolicy=Always \
--namespace cert-manager
```
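To verify the webhook deployment came up, a quick look at the `cert-manager` namespace should show the new pod (the exact pod and release names depend on the chart's defaults):
```shell
helm list -n cert-manager
kubectl get pods -n cert-manager
```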
### Update VirtualServices and deploy an Istio Gateway config
This includes adding the `fihome.xyz` domain to the VirtualService resources, and adding a TLS entry for the respective `Istio Gateway` configuration.
```shell
kubectl apply -f P8_Ingress -R
```
```text
gateway.networking.istio.io/fihome-gateway created
virtualservice.networking.istio.io/filebrowser-vs configured
virtualservice.networking.istio.io/gitea-vs configured
virtualservice.networking.istio.io/jelly-vs configured
virtualservice.networking.istio.io/tube-vs configured
```
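To confirm the ingress gateway picked up the new configuration, the proxy config can be inspected; `<pod-name>` below is a placeholder for whichever pod the label selector returns:
```shell
# Find the public ingress pods (the `istio: public-ingress` label matches the Gateway selector).
kubectl get pods -n istio-system -l istio=public-ingress
# Inspect the listeners of one of them; port 443 should appear once the TLS secret is available.
istioctl proxy-config listeners <pod-name> -n istio-system
```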
### Deploy cert-manager config for the secondary domain
```shell
kubectl apply -f P8_Fihome_Cert -R
```
```text
clusterissuer.cert-manager.io/letsencrypt-fihome created
certificate.cert-manager.io/fihome-domain-cert-public created
secret/fihome-godaddy-api-key created
```
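The `P8_Fihome_Cert` files themselves are not part of this diff, but based on the resources created above and the webhook's documented `ClusterIssuer` format, they would look roughly like this sketch (the e-mail, TTL, and API-key secret contents are illustrative; `fihome-godaddy-api-key` is assumed to hold the GoDaddy `key:secret` token):
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-fihome
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your-email>               # illustrative placeholder
    privateKeySecretRef:
      name: letsencrypt-fihome
    solvers:
    - selector:
        dnsZones:
        - fihome.xyz
      dns01:
        webhook:
          groupName: acme.fihome.xyz  # must match the --set groupName used above
          solverName: godaddy
          config:
            apiKeySecretRef:
              name: fihome-godaddy-api-key
              key: token
            production: true
            ttl: 600
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: fihome-domain-cert-public
  namespace: istio-system             # must live next to istiod (see Difficulties below)
spec:
  secretName: fihome-cert             # referenced by the Gateway's credentialName
  issuerRef:
    name: letsencrypt-fihome
    kind: ClusterIssuer
  dnsNames:
  - fihome.xyz
  - "*.fihome.xyz"
```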
### Monitor the certificate provisioning
#### Check Certificate Issuer status
It was very fast in my scenario, but since we are doing a DNS01 challenge it could take some time, especially if we need to wait for DNS propagation, which, as a rule of thumb, can take up to 72 hours.
```shell
kubectl get -n istio-system certificate fihome-domain-cert-public -o jsonpath='{.metadata.name}{"\t"}{.status.conditions[].reason}{"\t"}{.status.conditions[].message}{"\n"}'
```
```text
fihome-domain-cert-public Ready Certificate is up to date and has not expired
```
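If the certificate does not become `Ready`, the intermediate ACME resources usually show where it is stuck, and the challenge TXT record can be checked directly:
```shell
# Follow the chain Certificate -> CertificateRequest -> Order -> Challenge.
kubectl get certificaterequests,orders,challenges -n istio-system
# Check whether the DNS01 TXT record has propagated yet.
dig +short TXT _acme-challenge.fihome.xyz
```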
#### Certificate logs
```shell
kubectl get events -n istio-system --field-selector involvedObject.name=fihome-domain-cert-public,involvedObject.kind=Certificate --sort-by=.metadata.creationTimestamp --watch
```
```text
LAST SEEN TYPE REASON OBJECT MESSAGE
64s Normal Issuing certificate/fihome-domain-cert-public Issuing certificate as Secret does not exist
64s Normal Generated certificate/fihome-domain-cert-public Stored new private key in temporary Secret resource "fihome-domain-cert-public-wz9hv"
64s Normal Requested certificate/fihome-domain-cert-public Created new CertificateRequest resource "fihome-domain-cert-public-1"
61s Normal Issuing certificate/fihome-domain-cert-public The certificate has been successfully issued
```
## Part 9
Well, I did not document this either, but I did deploy the NFS provisioner from this repo:
- https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
So far it has been working correctly. I can't say I have tested it thoroughly, but the services I am running work fine, so, so far so good.
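For completeness, the chart's documented install looks like this; since my actual values were not recorded, the NFS server address and export path below are placeholders:
```shell
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=<media-srv-ip> \
    --set nfs.path=<exported-path>
```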
## Part 10
- ## Difficulties
+ # Difficulties
The cert-manager `Certificate` must be created in the same namespace as the `istiod` service; the namespace where the LB is located is not taken into account.
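A quick way to confirm the placement is to check that the generated TLS secret actually exists in `istio-system`:
```shell
# The secret referenced by the Gateway's credentialName must be here, not in the gateway's own namespace.
kubectl get secret fihome-cert -n istio-system
```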
## no healthy upstream
During the "testing of configurations", I created a the `DestinationRule` for the Jelly on the namespace `default`, instead of `external`.
@@ -2512,7 +2634,7 @@ Let's check the "access logs" differences when using one selector or another.
#### Custom
```shell
-kubectl get logs -l istio=public-ingress -n istio-system
+kubectl logs -l istio=public-ingress -n istio-system
```
```text