|
|
|
|
|
|
|
|
- Normalize the distribution running on the `arm7` devices -> choose one OS that works on all devices.
|
|
|
|
|
|
|
|
|
|
- New `Gitea` account with limited permissions for CI/CD.
|
|
|
|
|
- The deployed Istio Load Balancer must maintain a static IP.
|
|
|
|
|
|
|
|
|
|
## Dependencies
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
[//]: # (> Finished /July/2023)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
> "Completed" 24/July/2023\
|
|
|
|
|
> Last update 26/July/2023
|
|
|
|
|
|
|
|
|
|
- [x] Deploy an Istio Ingress Gateway to allow access to the "core" services.
|
|
|
|
|
|
|
|
|
|
- [x] Deploy an Istio Egress Gateway to allow egress towards the "core" local services.
|
|
|
|
|
|
|
|
|
|
- [x] Deploy an Ingress LB for local thingies.
|
|
|
|
|
|
|
|
|
|
> **Note:**\
|
|
|
|
|
> https://istio.io/latest/docs/tasks/traffic-management/egress/ \
|
|
|
|
|
> https://istio.io/latest/docs/tasks/traffic-management/egress/egress-kubernetes-services/ \
|
|
|
|
|
> https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/ \
|
|
|
|
|
> https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig \
|
|
|
|
|
> https://istio.io/latest/docs/setup/additional-setup/config-profiles/ \
|
|
|
|
|
> https://istio.io/latest/docs/setup/additional-setup/customize-installation/
|
|
|
|
|
|
|
|
|
|
### Part 3
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
> Completed 24/July/2023\
|
|
|
|
|
> Last update 26/July/2023
|
|
|
|
|
|
|
|
|
|
- [x] Configure Istio to route traffic towards Jellyfin and Filebrowser services.
|
|
|
|
|
- [x] Configure the public Istio Ingress to route traffic towards Gitea, Jellyfin, Filebrowser and Tube services.
|
|
|
|
|
|
|
|
|
|
- [x] Backup the Istio/Kubernetes configurations used.
|
|
|
|
|
|
|
|
|
|
#### Rollback plan
|
|
|
|
|
|
|
|
|
|
- At most, delete the deployed configurations.
|
|
|
|
|
|
|
|
|
|
### Part 4
|
|
|
|
|
|
|
|
|
|
> Completed 24/July/2023
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- [x] Deploy Certificate Manager on the current `Kluster` with `Let's Encrypt` certificate provisioning.
|
|
|
|
|
|
|
|
|
|
- [x] Update the router `NAT - Virtual Servers` to set the Istio Ingress deployed as the new Public Ingress.
|
|
|
|
|
|
|
|
|
|
- [x] Ensure the Certificates are being provisioned.
|
|
|
|
|
|
|
|
|
|
- [x] Backup the Istio/Kubernetes configuration files used so far.
|
|
|
|
|
|
|
|
|
|
> **Note**:\
|
|
|
|
|
> - https://istio.io/latest/docs/setup/additional-setup/config-profiles/
|
|
|
|
|
> - https://istio.io/latest/docs/setup/additional-setup/customize-installation/
|
|
|
|
|
> Regarding Let's Encrypt certificate provisioning, the `staging` environment should be used for testing purposes; nevertheless, in my scenario I am running directly against the production environment. Why?\
|
|
|
|
|
> - `Staging` and `Production` behave differently, therefore one can get the certificates verified on `Staging` and not on `Production`.
|
|
|
|
|
> - I ran into some issues regarding the point above, so there was some back and forth; this topic is mentioned [here at the end.](#2x1-able-to-get-staging-le-certs-but-not-the-production-one-when-using-custom-istio-selector-and-only-able-to-get-production-le-certs-when-using-the-default-istio-ingressgateway-selector)
|
|
|
|
|
> - Since there was "back and forth", I sort of cheated and left this as it is.
|
|
|
|
|
|
|
|
|
|
#### Rollback plan
|
|
|
|
|
|
|
|
|
|
- Delete the deployed configurations.
|
|
|
|
|
- Update the router `Virtual Service Port Mapping` to set back the Traefik Ingress from the Pi4 host.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Part 5
|
|
|
|
|
So far we have deployed an Internal LB, but it is unused, therefore:
|
|
|
|
|
|
|
|
|
|
- [ ] Deploy an Istio Egress Gateway to allow egress towards the "core" local services.
|
|
|
|
|
- [ ]
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### Rollback plan
|
|
|
|
|
|
|
|
|
|
- At most, delete the deployed configurations.
|
|
|
|
|
|
|
|
|
|
### Part 6
|
|
|
|
|
|
|
|
|
|
- Deploy an Ingress LB for local thingies
|
|
|
|
|
- Update local DNS records accordingly.
|
|
|
|
|
|
|
|
|
|
#### Rollback plan
|
|
|
|
|
|
|
|
|
|
- At most, delete the deployed configurations.
|
|
|
|
|
- [x] Update local DNS records accordingly.
|
|
|
|
|
|
|
|
|
|
### Part 7
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Part 9
|
|
|
|
|
|
|
|
|
|
- [ ] ~~Wipe~~ (**don't wipe** just use a different drive) and recreate the current `Kluster`, this time using the Pi4 as a _master_, and the 2 Orange Pi5 as _slaves_ (this will require updating the DNS/DHCP local services).
|
|
|
|
|
- [ ] Set the CM configuration to use the "Production" `Let's Encrypt` CA, aka. non-dev.
|
|
|
|
|
- [ ] Deploy Istio security.
|
|
|
|
|
|
|
|
|
|
> **Note**:\
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Deploy
|
|
|
|
|
|
|
|
|
|
Since there can only be a single `IstioOperator` kind object per file, I had to spread them across multiple files (a hypothetical sketch of one such file follows the list):
|
|
|
|
|
|
|
|
|
|
- IstioOperator_IstioConfig.yaml
|
|
|
|
|
- IstioOperator_IstioEgress.yaml
|
|
|
|
|
- IstioOperator_LocalIngress.yaml
|
|
|
|
|
- IstioOperator_PublicIngress.yaml
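To make the split concrete, here is a hedged, minimal sketch of what one of these files can look like: a gateway-only `IstioOperator`. The file name, the `profile: empty` choice and the exact layout are assumptions for illustration, not the exact content used in this repository; the labels mirror the ones visible on the deployed service later on.

```shell
# Hypothetical gateway-only IstioOperator: one IstioOperator object per file.
cat > IstioOperator_PublicIngress.example.yaml <<'EOF'
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: public-ingress
  namespace: istio-system
spec:
  profile: empty                    # install nothing besides the gateway listed below
  components:
    ingressGateways:
      - name: istio-public-ingress
        namespace: istio-system
        enabled: true
        label:
          app: istio-public-ingress
          istio: public-ingress     # custom selector referenced later by Gateway objects
EOF
istioctl install -y -f IstioOperator_PublicIngress.example.yaml
```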
|
|
|
|
|
|
|
|
|
|
#### IstioOperator_IstioConfig.yaml
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl apply -f P2_Ingress_Public_LB
|
|
|
|
|
istioctl install -f P2_Istio_Config/IstioOperator_IstioConfig.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
service/istio-public-ingress created
|
|
|
|
|
deployment.apps/istio-public-ingress created
|
|
|
|
|
role.rbac.authorization.k8s.io/istio-public-ingress-sds created
|
|
|
|
|
rolebinding.rbac.authorization.k8s.io/istio-public-ingress-sds created
|
|
|
|
|
gateway.networking.istio.io/http-to-https created
```
|
|
|
|
|
<pre>This will install the Istio 1.18.2 minimal profile with ["Istio core" "Istiod"] components into the cluster. Proceed? (y/N) y
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Istio core installed
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Istiod installed
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Installation complete
|
|
|
|
|
Making this installation the default for injection and validation.</pre>
|
|
|
|
|
|
|
|
|
|
#### IstioOperator_IstioEgress.yaml
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
istioctl install -y -f P2_Istio_Config/IstioOperator_IstioEgress.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
<pre><span style="color:#7F3FBF">✔</span> Egress gateways installed
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Installation complete </pre>
|
|
|
|
|
|
|
|
|
|
#### IstioOperator_LocalIngress.yaml
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
istioctl install -y -f P2_Istio_Config/IstioOperator_LocalIngress.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
<pre><span style="color:#7F3FBF">✔</span> Ingress gateways installed
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Installation complete</pre>
|
|
|
|
|
|
|
|
|
|
#### IstioOperator_PublicIngress.yaml
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
istioctl install -y -f P2_Istio_Config/IstioOperator_PublicIngress.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
<pre><span style="color:#7F3FBF">✔</span> Ingress gateways installed
|
|
|
|
|
<span style="color:#7F3FBF">✔</span> Installation complete </pre>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Check that the created resources exist.
|
|
|
|
|
|
|
|
|
|
We can see the LB running with the IP `192.168.1.80`.
|
|
|
|
|
|
|
|
|
|
> **Disclaimer:**\
|
|
|
|
|
> I use MetalLB to assist on the external IP provisioning for the service(s) from the Kubernetes cluster.
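For context, a hypothetical sketch of the MetalLB side (the pool name, namespace and exact range are assumptions; the addresses mirror the LB IPs that show up below):

```shell
# Assumed MetalLB pool covering the LoadBalancer IPs used by the Istio gateways.
cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kluster-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.80-192.168.1.90
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kluster-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - kluster-pool
EOF
```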
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl get svc -owide -n istio-system -l app=istio-public-ingress
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
|
|
|
|
|
istio-public-ingress LoadBalancer 10.104.158.106 192.168.1.80 80:32029/TCP,443:32644/TCP 87s app=istio-public-ingress,istio=public-ingress
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
### Test HTTP to HTTPS redirect
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
curl gitea.filterhome.xyz -I
|
|
|
|
|
kubectl get svc -n istio-system -owide -l istio | grep LoadBalancer
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
HTTP/1.1 301 Moved Permanently
|
|
|
|
|
location: https://gitea.filterhome.xyz/
|
|
|
|
|
date: Mon, 24 Jul 2023 22:54:10 GMT
|
|
|
|
|
server: istio-envoy
|
|
|
|
|
transfer-encoding: chunked
|
|
|
|
|
```
|
|
|
|
|
<pre>
|
|
|
|
|
egress-gw <span style="color:#FF7F7F"><b>LoadBalancer</b></span> 10.106.14.109 192.168.1.90 80:31576/TCP,443:30248/TCP 2m44s app=istio-egress-gw,istio=egress-gw
|
|
|
|
|
istio-local-ingress <span style="color:#FF7F7F"><b>LoadBalancer</b></span> 10.107.40.136 192.168.1.81 15021:31546/TCP,80:31242/TCP,443:32271/TCP 2m6s app=istio-local-ingress,istio=local-ingress
|
|
|
|
|
istio-public-ingress <span style="color:#FF7F7F"><b>LoadBalancer</b></span> 10.100.188.236 192.168.1.80 15021:30125/TCP,80:30999/TCP,443:32596/TCP 100s app=istio-public-ingress,istio=public-ingress
|
|
|
|
|
</pre>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Part 3
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Deploy the configurations towards the "external" resources/services.
|
|
|
|
|
|
|
|
|
|
> **Note:**\
|
|
|
|
|
> They are external with respect to the Kubernetes cluster, not to the "local/home" network.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl apply -f P3_External_Services -R
|
|
|
|
```

```text
namespace/external created
|
|
|
|
|
destinationrule.networking.istio.io/filebrowser created
|
|
|
|
|
serviceentry.networking.istio.io/filebrowser-se created
|
|
|
|
|
virtualservice.networking.istio.io/filebrowser-vs created
|
|
|
|
|
gateway.networking.istio.io/http-to-https created
|
|
|
|
|
gateway.networking.istio.io/public-gateway created
|
|
|
|
|
destinationrule.networking.istio.io/gitea created
|
|
|
|
|
serviceentry.networking.istio.io/gitea-se created
|
|
|
|
serviceentry.networking.istio.io/tube-se created
|
|
|
|
|
virtualservice.networking.istio.io/tube-vs created
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
Since the services are expecting a certificate, we can't fully test them yet; still, we can test how incoming HTTP traffic on port 80 is forwarded to HTTPS.
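For reference, a hedged sketch of roughly what the `http-to-https` Gateway applied above looks like; the hosts entry is an assumption, and the real definition lives in the repository's config directories.

```shell
# Hedged sketch of the HTTP -> HTTPS redirect Gateway (hosts are an assumption).
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: http-to-https
spec:
  selector:
    istio: public-ingress        # targets the public ingress gateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*.filterhome.xyz"
      tls:
        httpsRedirect: true      # plain HTTP gets answered with a 301 towards HTTPS
EOF
```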
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Part 4
|
|
|
|
|
|
|
|
|
|
### Router configuration.
|
|
|
|
|
|
|
|
|
|
I have set the deployed Istio LB as an ingress resource for my home router.
|
|
|
|
|
|
|
|
|
|
### Deploy CM
|
|
|
|
|
As per the [cert-manager installation guide through `kubectl`](https://cert-manager.io/docs/installation/kubectl/), I ran the following command to install the CRDs:
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
|
|
|
|
|
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
> **Note**:\
|
|
|
|
|
> I installed it a while beforehand.
|
|
|
|
|
|
|
|
|
|
### Verify `cert-manager` installation.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl get pods --namespace cert-manager
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
NAME READY STATUS RESTARTS AGE
|
|
|
|
|
cert-manager-5f68c9c6dd-q7mfk 1/1 Running 1 (33h ago) 7d7h
|
|
|
|
|
cert-manager-cainjector-57d6fc9f7d-9c2mp 1/1 Running 1 (33h ago) 7d7h
|
|
|
|
|
cert-manager-webhook-5b7ffbdc98-g5z98 1/1 Running 0 7d7h
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
### Set `cert-manager` config to provision LE certificates through ACME HTTP01 Challenge.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl apply -f P3_Certificate_Manager
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
clusterissuer.cert-manager.io/letsencrypt-public created
|
|
|
|
|
certificate.cert-manager.io/filterhome-domain-cert-public created
|
|
|
|
|
```
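For reference, a hedged sketch of roughly what such a `ClusterIssuer`/`Certificate` pair looks like for the ACME HTTP01 challenge. The resource names match the output above; the email, secret names and the single DNS name are assumptions, not the exact manifests from this repository.

```shell
# Hedged sketch: ClusterIssuer (ACME HTTP01 solved through Istio) plus the Certificate
# stored in istio-system so the ingress gateway can consume it.
cat <<'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-public
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory   # swap for the staging URL when testing
    email: someone@example.com                                # placeholder
    privateKeySecretRef:
      name: letsencrypt-public-account-key                    # assumption
    solvers:
      - http01:
          ingress:
            class: istio
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: filterhome-domain-cert-public
  namespace: istio-system
spec:
  secretName: filterhome-domain-cert-public   # assumption; referenced by the Gateway's credentialName
  issuerRef:
    name: letsencrypt-public
    kind: ClusterIssuer
  dnsNames:
    - gitea.filterhome.xyz                    # assumption: one entry per exposed host
EOF
```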
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Monitor the Certificate Provisioning process
|
|
|
|
|
|
|
|
|
|
#### Check Certificate Issuer status
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl get -n istio-system certificate filterhome-domain-cert-public -o jsonpath='{.metadata.name}{"\t"}{.status.conditions[].reason}{"\t"}{.status.conditions[].message}{"\n"}'
|
|
|
|
|
```
|
|
|
|
```text
|
|
|
|
|
filterhome-domain-cert-public DoesNotExist Issuing certificate as Secret does not exist
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
#### Check Certificate Issuer events
|
|
|
|
|
Once the certificates are provisioned, it will look like this:
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
filterhome-domain-cert-public Ready Certificate is up to date and has not expired
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### Certificate logs
|
|
|
|
|
|
|
|
|
|
This can take some time, do not despair (yet).
|
|
|
|
|
|
|
|
|
|
If you notice that it's stuck for some minutes, consider checking the access logs.
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl get events -n istio-system --field-selector involvedObject.name=filterhome-domain-cert-public,involvedObject.kind=Certificate --sort-by=.metadata.creationTimestamp --watch
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
LAST SEEN TYPE REASON OBJECT MESSAGE
|
|
|
|
|
4s Normal Issuing certificate/filterhome-domain-cert-public Issuing certificate as Secret does not exist
|
|
|
|
|
0s Normal Generated certificate/filterhome-domain-cert-public Stored new private key in temporary Secret resource "filterhome-domain-cert-public-v5ss4"
|
|
|
|
|
1s Normal Requested certificate/filterhome-domain-cert-public Created new CertificateRequest resource "filterhome-domain-cert-public-s4vvt"
|
|
|
|
|
LAST SEEN TYPE REASON OBJECT MESSAGE
|
|
|
|
|
0s Normal Issuing certificate/filterhome-domain-cert-public Issuing certificate as Secret does not exist
|
|
|
|
|
0s Normal Generated certificate/filterhome-domain-cert-public Stored new private key in temporary Secret resource "filterhome-domain-cert-public-qmnzd"
|
|
|
|
|
0s Normal Requested certificate/filterhome-domain-cert-public Created new CertificateRequest resource "filterhome-domain-cert-public-csfpk"
|
|
|
|
|
0s Normal Issuing certificate/filterhome-domain-cert-public The certificate has been successfully issued
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
Once the certificates are provisioned, access the services, whether from the CLI or the browser; in my scenario I used the browser.
|
|
|
|
|
#### Access Logs
|
|
|
|
|
|
|
|
|
|
Since (for the moment) I am using the testing `Let's Encrypt` CA service, the certificates are flagged as "self-signed".
|
|
|
|
|
In the access logs we can see the `ACME Challenge` requests, and that they are being routed towards the service `cm-acme-http-solver-qvpvl.istio-system.svc.cluster.local`.
|
|
|
|
|
|
|
|
|
|
As well, some of the first entries result in `503 - no_healthy_upstream`. Why? Simply because the backend is still waking up; give it a minute or two before worrying if these messages persist.
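If the `503`s persist for longer than that, a quick (assumed) sanity check is to verify that the temporary ACME solver pod and service actually exist:

```shell
# Assumed quick check: the temporary solver pod/service only exists while a challenge is pending.
kubectl get pods,svc -n istio-system | grep cm-acme-http-solver
```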
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
curl gitea.filterhome.xyz
|
|
|
|
|
kubectl logs -l istio=public-ingress -n istio-system -f
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
curl: (60) SSL certificate problem: unable to get local issuer certificate
|
|
|
|
|
More details here: https://curl.se/docs/sslcerts.html
|
|
|
|
|
|
|
|
|
|
curl failed to verify the legitimacy of the server and therefore could not
|
|
|
|
|
establish a secure connection to it. To learn more about this situation and
|
|
|
|
|
how to fix it, please visit the web page mentioned above.
|
|
|
|
|
[2023-07-26T22:22:13.641Z] "GET /.well-known/acme-challenge/abcd1234abcd1234abcd1234abcd1234abcd1234 HTTP/1.1" 503 UH no_healthy_upstream - "-" 0 19 0 - "172.17.247.0" "cert-manager-challenges/v1.12.0 (linux/arm64) cert-manager/abcd1234abcd1234abcd1234abcd1234abcd1234" "987765-asdv-asdf-978987979789" "gitea.filterhome.xyz" "-" outbound|8089||cm-acme-http-solver-qvpvl.istio-system.svc.cluster.local - 172.17.121.127:8080 172.17.247.0:13176 - -
|
|
|
|
|
[2023-07-26T22:22:23.538Z] "GET /.well-known/acme-challenge/abcd1234abcd1234abcd1234abcd1234abcd1234 HTTP/1.1" 200 - via_upstream - "-" 0 87 6 3 "172.17.247.0" "cert-manager-challenges/v1.12.0 (linux/arm64) cert-manager/abcd1234abcd1234abcd1234abcd1234abcd1234" "987765-asdv-asdf-978987979789" "gitea.filterhome.xyz" "172.17.121.95:8089" outbound|8089||cm-acme-http-solver-qvpvl.istio-system.svc.cluster.local 172.17.121.127:44994 172.17.121.127:8080 172.17.247.0:57220 - -
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Test HTTP to HTTPS redirect gateway
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
curl http://gitea.filterhome.xyz -I
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
HTTP/1.1 301 Moved Permanently
|
|
|
|
|
location: https://gitea.filterhome.xyz/
|
|
|
|
|
date: Wed, 26 Jul 2023 22:34:13 GMT
|
|
|
|
|
server: istio-envoy
|
|
|
|
|
transfer-encoding: chunked
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Test HTTPS access towards the services works correctly
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
curl gitea.filterhome.xyz -I --location
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
HTTP/2 200
|
|
|
|
|
date: Wed, 26 Jul 2023 22:33:53 GMT
|
|
|
|
|
x-envoy-upstream-service-time: 6
|
|
|
|
|
server: istio-envoy
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Egress Gateway
|
|
|
|
|
|
|
|
|
|
#### I am here <----
|
|
|
|
|
|
|
|
|
|
---
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
In the end, it turns out I didn't delete both of these resources (DestinationRule and ServiceEntry).
|
|
|
|
|
|
|
|
|
|
This called for a renaming of the configuration, so this doesn't occur _again_ (until it occurs).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
---
|
|
|
|
|
|
|
|
|
|
## 2X1 Able to get `staging` LE certs, but not the `production` one when using custom Istio selector, and only able to get `production` LE certs when using the default `istio: ingressgateway` selector.
|
|
|
|
|
|
|
|
|
|
I did _some_ ~~lots of~~ tests in order to determine what was happening; since I was just trying to figure out what it was, I didn't really document much.
|
|
|
|
|
|
|
|
|
|
This issue raised 2 questions:
|
|
|
|
|
|
|
|
|
|
- What is the difference between LE `staging` and `production`?
|
|
|
|
|
- What is the difference between the default `istio: ingressgateway` selector and `istio: whateveriwannaputhere`?
|
|
|
|
|
|
|
|
|
|
### What is the difference between LE `staging` and `production`?
|
|
|
|
|
|
|
|
|
|
As far as I understood by reading through `The Internet`, the `staging` environment is intended for testing (duh), therefore it is a lot more generous when performing the validations.
|
|
|
|
|
|
|
|
|
|
How much more generous? I suspect a whole lot.
|
|
|
|
|
|
|
|
|
|
### What is the difference between the default `istio: ingressgateway` selector and `istio: whateveriwannaputhere`?
|
|
|
|
|
|
|
|
|
|
Let's check the "access logs" differences when using one selector or another.
|
|
|
|
|
|
|
|
|
|
> **Note**:\
|
|
|
|
|
> The setting `meshConfig.accessLogFile` was set to `/dev/stdout` in the Istio configuration during its installation; it's required to capture the Istio deployment access logs.\
|
|
|
|
|
> Therefore, in case you don't see any activity, consider checking for this.
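A minimal sketch, assuming an `IstioOperator`-based install, of how that setting can be enabled (hypothetical file name, not the exact `IstioOperator_IstioConfig.yaml` used here):

```shell
# Hypothetical file; only shows the relevant meshConfig knob.
# In practice, merge this into the existing IstioOperator config rather than a standalone install.
cat > IstioOperator_AccessLogs.example.yaml <<'EOF'
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    accessLogFile: /dev/stdout   # without this, `kubectl logs` shows no per-request entries
EOF
istioctl install -y -f IstioOperator_AccessLogs.example.yaml
```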
|
|
|
|
|
|
|
|
|
|
#### Custom
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl logs -l istio=public-ingress -n istio-system
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
[2023-07-26T17:52:19.189Z] "GET /.well-known/acme-challenge/abcd123412341234-12 HTTP/1.1" 404 - via_upstream - "-" 0 153 6 4 "172.17.121.64" "cert-manager-challenges/v1.12.0 (linux/arm64) cert-manager/bd192c4f76dd883f9ee908035b894ffb49002384" "25dd1fdc-fad0-4d50-8acc-fd347374a5d1" "pur.filterhome.xyz" "172.17.247.5:80" outbound|80||hell.default.svc.cluster.local 172.17.247.28:33176 172.17.247.28:8080 172.17.121.64:4678 - -
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
#### default ingressgateway
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl logs -l istio=ingressgateway -n istio-system
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
```text
|
|
|
|
|
[2023-07-26T17:57:19.789Z] "GET /.well-known/acme-challenge/abcd123412341234abcd123412341234 HTTP/1.1" 503 UH no_healthy_upstream - "-" 0 19 0 - "192.168.1.10" "cert-manager-challenges/v1.12.0 (linux/arm64) cert-manager/bd192c4f76dd883f9ee908035b894ffb49002384" "c968e808-1e51-4394-a50e-dbd90ead0b6e" "tr.filterhome.xyz" "-" outbound|8089||cm-acme-http-solver-tqdlw.istio-system.svc.cluster.local - 172.17.121.109:8080 192.168.1.10:2244 - -
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
> **Note:**\
|
|
|
|
|
> Hey you, I am very sure that you read the logs (as you should), so you might as well have noticed that the second one, which I claim to be the setting that addressed the issue, is actually returning a 503 status code.\
|
|
|
|
|
> Basically the destination service was booting up, nothing more.\
|
|
|
|
|
> Once it was up it allowed the correct provisioning of the certificate(s).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### Conclusion
|
|
|
|
|
|
|
|
|
|
Yes, that's a lot of text to get to the point, but first we need to understand the requirements to pass the LE certificate provisioning:
|
|
|
|
|
|
|
|
|
|
- The request must return a `200` status code.
|
|
|
|
|
- The response received must contain a very specific body, which is generated based on the acme-challenge key used (see the illustrative check after this list).
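An illustrative manual check of that path (the token value is hypothetical; cert-manager generates a real one per challenge):

```shell
# The token below is a placeholder; Let's Encrypt expects a 200 plus the matching key authorization body.
TOKEN="abcd1234abcd1234abcd1234abcd1234abcd1234"
curl -i "http://gitea.filterhome.xyz/.well-known/acme-challenge/${TOKEN}"
```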
|
|
|
|
|
|
|
|
|
|
Regarding the 2 `access-log` entries shared above, the main thing I wanted to point out is the backend service.
|
|
|
|
|
|
|
|
|
|
- When using a custom selector, the service is the expected backend service, which in this scenario is `hell.default.svc.cluster.local`.
|
|
|
|
|
|
|
|
|
|
- When using the `ingressgateway` selector, the destination service is `cm-acme-http-solver-tqdlw.istio-system.svc.cluster.local`.
|
|
|
|
|
|
|
|
|
|
Where does the service `cm-acme-http-solver-tqdlw.istio-system.svc.cluster.local` come from? It's important **because it's the service that should reply to the requests from the ACME HTTP01 challenge**.
|
|
|
|
|
|
|
|
|
|
While I was still using the custom selector, I ran the following command:
|
|
|
|
|
|
|
|
|
|
```shell
|
|
|
|
|
kubectl exec "$(kubectl get pod -l app=istiod -o jsonpath={.items..metadata.name} -n istio-system)" -n istio-system curl localhost:8080/debug/configz | jq | less
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
Through the output, I could see an Istio gateway object being listed, with the name `cm-acme-http-solver-ww9t7-istio-autogenerated-k8s-ingress-istio-system`, and, casually enough, it had the selector set to `istio: ingressgateway`.
|
|
|
|
|
|
|
|
|
|
That was progress, yet I was still confused about why it chooses `istio: ingressgateway`, who creates this resource, and, on top of that, why I couldn't seem to set the selector manually.
|
|
|
|
|
|
|
|
|
|
As well, the `creationTimestamp` field was set to `null`, which is why I wasn't able to notice the existence of this resource previously: it wasn't actually created.
|
|
|
|
|
Therefore, I understand that it's a resource that is generated spontaneously and does not go through the full process of creating a resource in Istio.
|
|
|
|
|
|
|
|
|
|
So far I could only consider 2 locations that could be related to this and should allow me to alter the fields of the `Gateway` resource found.
|
|
|
|
|
|
|
|
|
|
- The Certificate Manager `ClusterIssuer` configurations.
|
|
|
|
|
- Istio's MeshConfig configuration.
|
|
|
|
|
|
|
|
|
|
Long story short, I didn't find anything on the Certificate Manager side.
|
|
|
|
|
|
|
|
|
|
Regarding Istio MeshConfig, what did I do?
|
|
|
|
|
|
|
|
|
|
Very simple: I opened up the [following documentation regarding `Global Mesh Options.MeshConfig`](https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/), pressed `Control+F` and searched for `ingressgateway`.
|
|
|
|
|
|
|
|
|
|
I was able to find the following highlighted fields:
|
|
|
|
|
- ingressService
|
|
|
|
|
- ingressSelector
|
|
|
|
|
|
|
|
|
|
Therefore, what did I do?
|
|
|
|
|
|
|
|
|
|
I updated those fields through an `IstioOperator` config file, to match the resources I was using to accept incoming traffic (a sketch of the override follows).
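A sketch of that override; the field values mirror the custom public ingress Service and labels deployed in Part 2, while the file name is hypothetical:

```shell
# Hypothetical file name; values match the custom public ingress labels/Service.
# In practice, merge this into the existing IstioOperator config rather than a standalone install.
cat > IstioOperator_IngressSelector.example.yaml <<'EOF'
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    ingressService: istio-public-ingress   # Service that should receive the HTTP01 traffic
    ingressSelector: public-ingress        # matches the gateway pods' `istio:` label
EOF
istioctl install -y -f IstioOperator_IngressSelector.example.yaml
```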
|
|
|
|
|
|
|
|
|
|
Guess what happened once I updated the fields:
|
|
|
|
|
|
|
|
|
|
- ingressService
|
|
|
|
|
- ingressSelector
|
|
|
|
|
|
|
|
|
|
It worked.
|
|
|
|
|
|
|
|
|
|
The ACME challenge was returning a 200 status code.
|
|
|
|
|
|
|
|
|
|
This left me with some questions, especially **how many things can be affected by this behavior**.
|
|
|
|
|
|
|
|
|
|
As well, the hassle caused by this issue provided more reasons to use the `ACME DNS01` challenge instead of `ACME HTTP01`, which will be configured as soon as it's available.
|
|
|
|
|
|
|
|
|
|