Compare commits

...

24 Commits

Author SHA1 Message Date
a4419990ea Merge branch 'main' into dev 2024-01-15 21:45:39 +01:00
990bada6e1 Merge pull request 'dev2_Jan_15_2024' (#67) from dev2_Jan_15_2024 into dev
Reviewed-on: #67
2024-01-15 21:44:43 +01:00
e088cf6659 Merge branch 'dev' into dev2_Jan_15_2024 2024-01-15 21:44:34 +01:00
ee13b20458 Fixed lowercase typo on folder name.
Quick made the README.md
2024-01-15 21:36:27 +01:00
2582e15e1a Added couple relevant links to the topic, nothing else. 2024-01-15 21:32:45 +01:00
404c036883 Deleted/Moved to main 2024-01-15 21:32:29 +01:00
6cb3c9fa50 Added simple monitoring examples.
They are based off Helm PrometheusStack community Chart.
2024-01-15 21:32:14 +01:00
f86ac97255 Merge pull request 'dev - 2023/10/14' (#66) from dev into main
Reviewed-on: https://gitea.filterhome.xyz/ofilter/Istio_Examples/pulls/66
2023-10-14 13:06:28 +02:00
2fb608c5bd Merge branch 'main' into dev 2023-10-14 13:06:17 +02:00
942a3bf8ae Updated Global README.md 2023-10-14 13:03:28 +02:00
74e5b9d5f0 Inserted into the respective directory. 2023-10-14 13:01:46 +02:00
7e004697a9 Fixed link refference 2023-10-14 13:00:52 +02:00
c02a355a95 Renamed folder 2023-10-14 13:00:22 +02:00
d2b1dc2284 Speed documented 2023-10-14 12:59:07 +02:00
f0ce2ae68d Fixed format 2023-10-14 12:50:13 +02:00
b6657bdd4c Renamed files and deglossed it's contents. 2023-10-14 12:30:39 +02:00
68efcde1fa Added a section README.md to the sections:
- 90-MixConfigs
- 99-resources
2023-10-14 12:22:17 +02:00
1cf75d5902 Deglossed the files. 2023-10-14 12:21:42 +02:00
df8eea778c Documented examples from 11-Fault_Injection:
- 01-FaultInjection-delay
- 02-FaultInjection-abort

Added a section README.md to the section:

- 11-Fault_Injection
2023-10-14 12:11:34 +02:00
c1aec3ae4c Renamed file to capitalized first letter. 2023-10-14 11:21:30 +02:00
957dbfcf84 Updated global README.md 2023-10-14 11:19:29 +02:00
e883755680 Added section README.md for:
- 10-mTLS_PeerAuthentication

Renamed its files to use a capital letter at the start of the files.
2023-10-14 11:14:19 +02:00
0a1e36dcaf Added section README.md for:
- 09-Ingress
2023-10-14 11:08:51 +02:00
1542aaafcc Merge pull request 'dev - mid update' (#63) from dev into main
Reviewed-on: https://gitea.filterhome.xyz/ofilter/Istio_Examples/pulls/63
2023-07-01 14:21:59 +00:00
68 changed files with 1854 additions and 498 deletions

View File

@@ -1,102 +0,0 @@
# Continues from
- 05-hello_world_1_Service_Entry
# Description
This example compares the behavior of setting the MeshConfig `OutboundTrafficPolicy.mode` setting to `REGISTRY_ONLY` and `ALLOW_ANY`.
- ALLOW_ANY: Allows all egress/outbound traffic from the mesh.
- REGISTRY_ONLY: Restricted to services that figure in the service registry and the ServiceEntry objects.
More info regarding this configuration at the pertinent documentation (https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-OutboundTrafficPolicy-Mode)
## Runthrough
### Set ALLOW_ANY outbound traffic policy
```shell
istioctl install --set profile=default -y --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY
```
### Deploy resources
```shell
$ kubectl apply -f ./
service/helloworld created
deployment.apps/helloworld-nginx created
serviceentry.networking.istio.io/external-svc created
gateway.networking.istio.io/helloworld-gateway created
virtualservice.networking.istio.io/helloworld-vs created
```
### Get LB IP
```shell
$ kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.97.47.216 192.168.1.50 15021:31316/TCP,80:32012/TCP,443:32486/TCP 39h
```
### Test deployments
```shell
$ curl 192.168.1.50/helloworld -I
HTTP/1.1 200 OK
server: istio-envoy
date: Thu, 20 Apr 2023 18:03:18 GMT
content-type: text/html
content-length: 615
last-modified: Tue, 28 Mar 2023 15:01:54 GMT
etag: "64230162-267"
accept-ranges: bytes
x-envoy-upstream-service-time: 73
```
```shell
$ curl 192.168.1.50/external -I
HTTP/1.1 200 OK
date: Thu, 20 Apr 2023 18:03:24 GMT
content-type: text/html
content-length: 5186
last-modified: Mon, 17 Mar 2014 17:25:03 GMT
expires: Thu, 31 Dec 2037 23:55:55 GMT
cache-control: max-age=315360000
x-envoy-upstream-service-time: 228
server: istio-envoy
```
### Test egress the helloworld deployment
It returns a 301 code, meaning that it was able to reach the destination, and it was attempted to redirect the traffic from HTTP to HTTPS.
```shell
$ kubectl exec -i -t "$(kubectl get pod -l app=helloworld | tail -n 1 | awk '{print $1}')" -- curl wikipedia.com -I
HTTP/1.1 301 Moved Permanently
server: envoy
date: Thu, 20 Apr 2023 18:06:57 GMT
content-type: text/html
content-length: 169
location: https://wikipedia.com/
x-envoy-upstream-service-time: 65
```
### Set REGISTRY_ONLY outbound traffic policy
```shell
istioctl install --set profile=default -y --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY
```
### Test (again) egress the helloworld deployment
It returns a 502 code, meaning that it wasn't able to reach the destination.
```shell
$ kubectl exec -i -t "$(kubectl get pod -l app=helloworld | tail -n 1 | awk '{print $1}')" -- curl wikipedia.com -I
HTTP/1.1 502 Bad Gateway
date: Thu, 20 Apr 2023 18:08:37 GMT
server: envoy
transfer-encoding: chunked
```

View File

@@ -1,57 +0,0 @@
# https://github.com/istio/istio/blob/master/samples/helloworld/helloworld.yaml
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 80
name: http
selector:
app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-nginx
labels:
app: helloworld
spec:
replicas: 1
selector:
matchLabels:
app: helloworld
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: helloworld
image: nginx
resources:
requests:
cpu: "100m"
imagePullPolicy: IfNotPresent #Always
ports:
- containerPort: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: external-svc
spec:
hosts:
- help.websiteos.com
# /websiteos/example_of_a_simple_html_page.htm
# - http://help.websiteos.com/websiteos/example_of_a_simple_html_page.htm
ports:
- number: 80
name: http
protocol: HTTP
resolution: DNS
location: MESH_EXTERNAL
---

View File

@@ -1,10 +0,0 @@
# Examples
- 01-Outboud-Traffic-Policy
## Additional
https://istio.io/latest/docs/tasks/observability/distributed-tracing/mesh-and-proxy-config/

View File

@@ -1,2 +0,0 @@
https://raw.githubusercontent.com/istio/istio/release-1.17/samples/httpbin/sample-client/fortio-deploy.yaml

View File

@@ -0,0 +1,3 @@
https://github.com/istio-ecosystem/wasm-extensions
https://github.com/istio-ecosystem/wasm-extensions/tree/master/extensions/basic_auth

View File

@@ -1,23 +1,3 @@
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 8080
name: http-web
targetPort: 80
protocol: TCP
- port: 8443
name: https-web
targetPort: 443
protocol: TCP
selector:
app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:

View File

@@ -0,0 +1,20 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: tcp-1
protocol: TCP
hosts:
- "*"
- port:
number: 443
name: tcp-2
protocol: TCP
hosts:
- "*"

View File

@@ -0,0 +1,19 @@
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 8080
name: http-web
targetPort: 80
protocol: TCP
- port: 8443
name: https-web
targetPort: 443
protocol: TCP
selector:
app: helloworld

View File

@@ -1,25 +1,4 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: tcp-1
protocol: TCP
hosts:
- "*"
- port:
number: 443
name: tcp-2
protocol: TCP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloworld-vs

View File

@@ -0,0 +1,26 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-nginx
labels:
app: helloworld
spec:
replicas: 1
selector:
matchLabels:
app: helloworld
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: helloworld
image: oriolfilter/https-nginx-demo
resources:
requests:
cpu: "100m"
imagePullPolicy: Always #Always
ports:
- containerPort: 80
- containerPort: 443

View File

@@ -0,0 +1,17 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
namespace: default
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https-web
protocol: HTTPS
hosts:
- "*"
tls:
mode: PASSTHROUGH

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- name: https
port: 8443
targetPort: 443
protocol: TCP
appProtocol: HTTPS
selector:
app: helloworld

View File

@@ -1,22 +1,4 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
namespace: default
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https-web
protocol: HTTPS
hosts:
- "*"
tls:
mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloworld-vs

View File

@@ -1,73 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- name: https
port: 8443
targetPort: 443
protocol: TCP
appProtocol: HTTPS
selector:
app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-nginx
labels:
app: helloworld
spec:
replicas: 1
selector:
matchLabels:
app: helloworld
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: helloworld
image: oriolfilter/https-nginx-demo
resources:
requests:
cpu: "100m"
imagePullPolicy: Always #Always
ports:
- containerPort: 80
- containerPort: 443
#---
#apiVersion: apps/v1
#kind: Deployment
#metadata:
# name: nginx
# labels:
# app: nginx
# version: v1
#spec:
# replicas: 1
# selector:
# matchLabels:
# app: nginx
# version: v1
# template:
# metadata:
# labels:
# app: nginx
# version: v1
# spec:
# # serviceAccountName: istio-helloworld
# containers:
# - name: nginx
# image: nginx
# resources:
# requests:
# cpu: "100m"
# imagePullPolicy: IfNotPresent
# ports:
# - containerPort: 80

View File

@@ -24,9 +24,9 @@ Bear in mind that when Istio is communicating with resources externals to the me
Also, policy enforcement is performed in the client side instead of the server side.
> **Note:**\
> For more information regarding the `resolution` field or the `location` field, refer to the following official Istio documentations:
> [ServiceEntry.Location](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Location)
> [ServiceEntry.Resolution](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Resolution)
> For more information regarding the `resolution` field or the `location` field, refer to the following official Istio documentations:\
> - [ServiceEntry.Location](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Location)\
> - [ServiceEntry.Resolution](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Resolution)
```yaml
apiVersion: networking.istio.io/v1alpha3

View File

@@ -1,57 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
credentialName: my-tls-cert-secret
mode: SIMPLE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloworld-vs
spec:
hosts:
- "*"
gateways:
- helloworld-gateway
http:
- name: https-vs
match:
- port: 80
- port: 443
route:
- destination:
host: helloworld.default.svc.cluster.local
port:
number: 8443
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: helloworld
namespace: default
spec:
host: helloworld.default.svc.cluster.local
trafficPolicy:
portLevelSettings:
- port:
number: 8443
tls:
mode: SIMPLE

View File

@@ -0,0 +1,25 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-nginx
labels:
app: helloworld
spec:
replicas: 1
selector:
matchLabels:
app: helloworld
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: helloworld
image: nginx
resources:
requests:
cpu: "100m"
imagePullPolicy: IfNotPresent #Always
ports:
- containerPort: 80

View File

@@ -0,0 +1,14 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"

View File

@@ -0,0 +1,338 @@
---
gitea: none
include_toc: true
---
# Description
Based on the previous example, where we configured an external service through a `ServiceEntry` object, this example compares the behavior of setting the MeshConfig `OutboundTrafficPolicy.mode` setting to `REGISTRY_ONLY` versus `ALLOW_ANY`.
- ALLOW_ANY: Allows all egress/outbound traffic from the mesh.
- REGISTRY_ONLY: Restricted to services that figure in the service registry and the ServiceEntry objects.
More info regarding this configuration can be found in the pertinent documentation (https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-OutboundTrafficPolicy-Mode)
> **Note:**\
> For more information about the image used refer to [here](https://hub.docker.com/r/oriolfilter/https-nginx-demo)
# Based on
- [01-Service_Entry](../01-Service_Entry)
# Configuration
## Gateway
Deploys an Istio gateway that's listening to the port `80` for `HTTP` traffic.
It doesn't filter for any specific host.
The `selector` field is used to "choose" which Istio Load Balancers this gateway will be assigned to.
The Istio `default` profile creates a Load Balancer in the namespace `istio-system` that has the label `istio: ingressgateway` set, allowing us to target that specific Load Balancer and assign this gateway resource to it.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
```
> **Note:**\
> The credentials resource is created further below through the [Walkthrough](#walkthrough) steps.
> **Note:**\
> For more information regarding the TLS mode configuration, refer to the following [Istio documentation regarding the TLS mode field](https://istio.io/latest/docs/reference/config/networking/gateway/#ServerTLSSettings-TLSmode).
## VirtualService
This configuration hosts 2 backends: one is the deployed service `helloworld.default.svc.cluster.local`, which will be accessible through the URL path `/helloworld`.
The second backend will be accessible through the URL path `/external`, uses the deployed `ServiceEntry` object as its destination, and has a timeout setting of 3 seconds.
This destination is the external service `help.websiteos.com`, reached over the port `80`.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloworld-vs
spec:
hosts:
- "*"
gateways:
- helloworld-gateway
http:
- match:
- uri:
exact: /helloworld
route:
- destination:
host: helloworld
port:
number: 80
rewrite:
uri: "/"
- timeout: 3s
match:
- uri:
exact: "/external"
route:
- destination:
host: help.websiteos.com
port:
number: 80
rewrite:
uri: "/websiteos/example_of_a_simple_html_page.htm"
headers:
request:
set:
HOST: "help.websiteos.com"
```
## Service
The service forwards incoming HTTP traffic on port `80` towards port `80` of the deployment.
```yaml
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 80
name: http
selector:
app: helloworld
```
## Deployment
Nginx deployment listens to port 80.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-nginx
labels:
app: helloworld
spec:
replicas: 1
selector:
matchLabels:
app: helloworld
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: helloworld
image: nginx
resources:
requests:
cpu: "100m"
imagePullPolicy: IfNotPresent #Always
ports:
- containerPort: 80
```
## ServiceEntry
This `ServiceEntry` resource defines the URL `help.websiteos.com` as a destination.
Note that `location` is set to `MESH_EXTERNAL` and that `resolution` is set to `DNS`; this means that the resource is external to our `Istio Service Mesh`, and the URL will be resolved through `DNS`.
Bear in mind that when Istio is communicating with resources external to the mesh, `mTLS` is disabled.
Also, policy enforcement is performed on the client side instead of the server side.
> **Note:**\
> For more information regarding the `resolution` field or the `location` field, refer to the following official Istio documentations:\
> - [ServiceEntry.Location](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Location)\
> - [ServiceEntry.Resolution](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Resolution)
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: external-svc
spec:
hosts:
- help.websiteos.com
ports:
- number: 80
name: http
protocol: HTTP
resolution: DNS
location: MESH_EXTERNAL
```
# Walkthrough
## Set ALLOW_ANY outbound traffic policy
The first step is to have the cluster with the `meshConfig.outboundTrafficPolicy.mode` setting set to `ALLOW_ANY`.
In case you are not using a "free to destroy" sandbox, you should update the setting through the `IstioOperator` object instead (see the sketch below).
```shell
istioctl install --set profile=default -y --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY
```
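As a hedged sketch of the `IstioOperator`-based alternative mentioned above (the file name `outbound-policy.yaml` is arbitrary and this overlay is an assumption, not part of the repository), the same mesh settings could be persisted in a manifest and fed to `istioctl`:
```shell
# Sketch: keep the mesh settings in an IstioOperator overlay instead of repeating --set flags.
cat <<'EOF' > outbound-policy.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    outboundTrafficPolicy:
      mode: ALLOW_ANY   # switch to REGISTRY_ONLY for the second half of the walkthrough
EOF
istioctl install -f outbound-policy.yaml -y
```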
## Deploy resources
```shell
kubectl apply -f ./
```
```text
deployment.apps/helloworld-nginx created
gateway.networking.istio.io/helloworld-gateway created
service/helloworld created
serviceentry.networking.istio.io/external-svc created
virtualservice.networking.istio.io/helloworld-vs created
```
## Get LB IP
```shell
kubectl get svc istio-ingressgateway -n istio-system
```
```text
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.97.47.216 192.168.1.50 15021:31316/TCP,80:32012/TCP,443:32486/TCP 39h
```
## Test deployments
```shell
curl 192.168.1.50/helloworld -I
```
```text
HTTP/1.1 200 OK
server: istio-envoy
date: Sat, 14 Oct 2023 10:53:45 GMT
content-type: text/html
content-length: 615
last-modified: Tue, 15 Aug 2023 17:03:04 GMT
etag: "64dbafc8-267"
accept-ranges: bytes
x-envoy-upstream-service-time: 53
```
```shell
curl 192.168.1.50/external -I
```
```text
HTTP/1.1 200 OK
date: Sat, 14 Oct 2023 10:54:13 GMT
content-type: text/html
content-length: 5186
last-modified: Mon, 17 Mar 2014 17:25:03 GMT
expires: Thu, 31 Dec 2037 23:55:55 GMT
cache-control: max-age=315360000
x-envoy-upstream-service-time: 306
server: istio-envoy
```
## Test egress from the helloworld deployment
It returns a 301 code, meaning that it was able to reach the destination, which attempted to redirect the traffic from HTTP to HTTPS.
```shell
kubectl exec -i -t "$(kubectl get pod -l app=helloworld | tail -n 1 | awk '{print $1}')" -- curl wikipedia.com -I
```
```text
HTTP/1.1 301 Moved Permanently
server: envoy
date: Sat, 14 Oct 2023 10:54:34 GMT
content-type: text/html
content-length: 169
location: https://wikipedia.com/
x-envoy-upstream-service-time: 61
```
## Set REGISTRY_ONLY outbound traffic policy
```shell
istioctl install --set profile=default -y --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY
```
In case you are not using a "free to destroy" sandbox, you should update the setting through the `IstioOperator` object.
## Test egress from the helloworld deployment (again)
It returns a 502 code, meaning that it wasn't able to reach the destination.
```shell
kubectl exec -i -t "$(kubectl get pod -l app=helloworld | tail -n 1 | awk '{print $1}')" -- curl wikipedia.com -I
```
```text
HTTP/1.1 502 Bad Gateway
date: Thu, 20 Apr 2023 18:08:37 GMT
server: envoy
transfer-encoding: chunked
```
This allowed us to confirm how the `outboundTrafficPolicy.mode` setting influences which destinations are reachable from within the mesh.
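If you want to double-check which mode is currently active, a minimal sketch (assuming the default mesh ConfigMap named `istio` in the `istio-system` namespace) is to inspect the mesh configuration directly:
```shell
# The mesh-wide configuration, including outboundTrafficPolicy, is stored under the "mesh" key.
kubectl -n istio-system get configmap istio -o jsonpath='{.data.mesh}' | grep -A1 outboundTrafficPolicy
```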
## Cleanup
```shell
kubectl delete -f ./
```
```text
deployment.apps "helloworld-nginx" deleted
gateway.networking.istio.io "helloworld-gateway" deleted
service "helloworld" deleted
serviceentry.networking.istio.io "external-svc" deleted
virtualservice.networking.istio.io "helloworld-vs" deleted
```
# Links of Interest
- https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#controlled-access-to-external-services
- https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 80
name: http
selector:
app: helloworld

View File

@@ -0,0 +1,13 @@
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: external-svc
spec:
hosts:
- help.websiteos.com
ports:
- number: 80
name: http
protocol: HTTP
resolution: DNS
location: MESH_EXTERNAL

View File

@@ -1,19 +1,3 @@
# https://github.com/istio/istio/blob/master/samples/helloworld/helloworld-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:

View File

@@ -1,20 +1,3 @@
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 8443
name: https
targetPort: 443
protocol: TCP
appProtocol: https
selector:
app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:

View File

@@ -0,0 +1,13 @@
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: helloworld
namespace: default
spec:
host: helloworld.default.svc.cluster.local
trafficPolicy:
portLevelSettings:
- port:
number: 8443
tls:
mode: SIMPLE

View File

@@ -0,0 +1,23 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
credentialName: my-tls-cert-secret
mode: SIMPLE

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 8443
name: https
targetPort: 443
protocol: TCP
appProtocol: https
selector:
app: helloworld

View File

@@ -0,0 +1,19 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloworld-vs
spec:
hosts:
- "*"
gateways:
- helloworld-gateway
http:
- name: https-vs
match:
- port: 80
- port: 443
route:
- destination:
host: helloworld.default.svc.cluster.local
port:
number: 8443

View File

@@ -6,4 +6,12 @@ This section will focus on the interaction with the backend and routing the traf
01-Service_Entry
02-HTTPS-backend
03-HTTPS-backend-with-mTLS (TODO)
03-Outboud-Traffic-Policy
04-HTTPS-backend-with-mTLS (TODO)
## Heads up
In the example `03-Outboud-Traffic-Policy`, Istio's `meshConfig.outboundTrafficPolicy` will need to be modified.
That example uses the `istioctl install` command to set it up, as I assume you are testing these examples in a sandbox that you are free to "destroy".

09-Ingress/README.md Normal file
View File

@@ -0,0 +1,15 @@
## Description
This section is focused on the Ingress Load Balancers and how to install or set them up.
Nevertheless, it currently only covers how to install one through the IstioOperator.
## Examples
- 01-Ingress-IstioOperator
## Links of interest
- https://istio.io/latest/docs/setup/additional-setup/gateway/

View File

@@ -0,0 +1,10 @@
## Description
This section is focused on mTLS: using the Kiali dashboard to confirm which services have mTLS enabled, how to enable and disable mTLS, and how to apply a desired configuration targeting a specific port.
## Examples
- 01-mTLS
- 02-disable-mTLS
- 03-mTLS-per-port-settings

View File

@@ -1,18 +1,3 @@
# https://github.com/istio/istio/blob/master/samples/helloworld/helloworld.yaml
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 80
name: http
selector:
app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:

View File

@@ -0,0 +1,14 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"

View File

@@ -0,0 +1,251 @@
---
gitea: none
include_toc: true
---
# Description
Based on the [previous example](../../01-Getting_Started/01-hello_world_1_service_1_deployment), we configure a "fault" that will make the backend take 10 extra seconds before receiving the request.
This will be applied to 90% of the incoming traffic that matches the rule, allowing us to confirm in a safe environment how the application would behave in such a difficult situation, and to apply the modifications required to avoid issues in case there was a real network problem.
This example configures:
Generic Kubernetes resources:
- 1 Service
- 1 Deployment
Istio resources:
- 1 Gateway
- 1 Virtual Service
# Based on
- [01-hello_world_1_service_1_deployment](../../01-Getting_Started/01-hello_world_1_service_1_deployment)
- https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPFaultInjection-Delay
# Configuration
## Service
Creates a service named `helloworld`.
This service listens for the port `80` expecting `HTTP` traffic and will forward the incoming traffic towards the port `80` from the destination pod.
```yaml
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 80
name: http
selector:
app: helloworld
```
## Deployment
Deploys a Nginx server that listens for the port `80`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-nginx
labels:
app: helloworld
spec:
replicas: 1
selector:
matchLabels:
app: helloworld
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: helloworld
image: nginx
resources:
requests:
cpu: "100m"
imagePullPolicy: IfNotPresent #Always
ports:
- containerPort: 80
```
## Gateway
Deploys an Istio gateway that's listening to the port `80` for `HTTP` traffic.
It doesn't filter for any specific host.
The `selector` field is used to "choose" which Istio Load Balancers this gateway will be assigned to.
The Istio `default` profile creates a Load Balancer in the namespace `istio-system` that has the label `istio: ingressgateway` set, allowing us to target that specific Load Balancer and assign this gateway resource to it.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
```
## VirtualService
The Virtual Service resources are used to route and filter the received traffic from the gateway resources, and route it towards the desired destination.
In this example we select the gateway `helloworld-gateway`, which is the [gateway that's described in the `Gateway` section](#gateway).
On this resource, we are also not limiting the incoming traffic to any specific host, allowing for all the incoming traffic to go through the rules set.
Here we created a rule that will be applied on `HTTP` related traffic (including `HTTPS` and `HTTP2`) when the destination path is exactly `/helloworld`.
This traffic will be forwarded to the port `80` of the destination service `helloworld` (the full path URL equivalent would be `helloworld.$NAMESPACE.svc.cluster.local`).
An internal URL rewrite is set because, if the URL was not modified, the request would attempt to reach the `/helloworld` path on the Nginx deployment, which currently has no such content and would result in an error code `404` (Not Found).
Additionally, we apply a "fault", where 90% of the traffic will receive an extra 10 seconds of delay.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloworld-vs
spec:
hosts:
- "*"
gateways:
- helloworld-gateway
http:
- match:
- uri:
exact: /helloworld
route:
- destination:
host: helloworld
port:
number: 80
rewrite:
uri: "/"
fault:
delay:
percentage:
value: 90
fixedDelay: 10s
```
# Walkthrough
## Deploy resources
Deploy the resources.
```shell
kubectl apply -f ./
```
```text
deployment.apps/helloworld-nginx created
gateway.networking.istio.io/helloworld-gateway created
service/helloworld created
virtualservice.networking.istio.io/helloworld-vs created
```
## Wait for the pods to be ready
Wait for the Nginx deployments to be up and ready.
```shell
kubectl get deployment helloworld-nginx -w
```
```text
NAME READY UP-TO-DATE AVAILABLE AGE
helloworld-nginx 1/1 1 1 12s
```
## Test the service
### Get LB IP
To perform the desired tests, we will need to obtain the IP of the Istio Load Balancer that we selected in the [Gateway section](#gateway).
In my environment, the IP is `192.168.1.50`.
```shell
kubectl get svc -l istio=ingressgateway -A
```
```text
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.97.47.216 192.168.1.50 15021:31316/TCP,80:32012/TCP,443:32486/TCP 39h
```
### helloworld
We will use the `curl` command and feed it a template that prints some timings, as well as the status code of the request.
Since the fault we set has a 90% chance of triggering, if you are "unlucky" and get an instant response from the backend, you might need to run the command multiple times in order to see the fault triggered.
```shell
curl -w @- -o /dev/null -s 192.168.1.50/helloworld <<'EOF'
http_code: %{http_code}\n
time_namelookup: %{time_namelookup}\n
time_connect: %{time_connect}\n
time_appconnect: %{time_appconnect}\n
time_pretransfer: %{time_pretransfer}\n
time_redirect: %{time_redirect}\n
time_starttransfer: %{time_starttransfer}\n
----------\n
time_total: %{time_total}\n
EOF
```
```text
http_code: 200
time_namelookup: 0.000010
time_connect: 0.000671
time_appconnect: 0.000000
time_pretransfer: 0.000689
time_redirect: 0.000000
time_starttransfer: 10.008781
----------
time_total: 10.008817
```
From the command output, we can observe that the request took more than 10 seconds to be answered, and also that the status code was successful, meaning that the application was able to handle the request.
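If you want to see the 90% ratio in action, a small sketch like the following (assuming the same Load Balancer IP `192.168.1.50`) sends a handful of requests and prints the status code and total time of each; roughly 9 out of 10 should take a bit over 10 seconds:
```shell
# Send 10 requests and print "status total_time" for each one.
for i in $(seq 1 10); do
  curl -o /dev/null -s -w "%{http_code} %{time_total}\n" 192.168.1.50/helloworld
done
```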
## Cleanup
Finally, a cleanup from the resources deployed.
```shell
kubectl delete -f ./
```
```text
deployment.apps "helloworld-nginx" deleted
gateway.networking.istio.io "helloworld-gateway" deleted
service "helloworld" deleted
virtualservice.networking.istio.io "helloworld-vs" deleted
```
# Links of interest
- https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPFaultInjection-Delay

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 80
name: http
selector:
app: helloworld

View File

@@ -1,19 +1,3 @@
# https://github.com/istio/istio/blob/master/samples/helloworld/helloworld-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
@@ -35,7 +19,7 @@ spec:
rewrite:
uri: "/"
fault:
delay:
delay:
percentage:
value: 90
value: 90
fixedDelay: 10s

View File

@@ -1,18 +1,3 @@
# https://github.com/istio/istio/blob/master/samples/helloworld/helloworld.yaml
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 80
name: http
selector:
app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:

View File

@@ -0,0 +1,14 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"

View File

@@ -0,0 +1,240 @@
---
gitea: none
include_toc: true
---
# Description
Based on the [previous example](../02-FaultInjection-abort), where we configured a "fault" that makes the backend take 10 extra seconds before receiving the request, this time we will make the request abort, resulting in a failed request (503 status code).
This will be applied to 90% of the incoming traffic that matches the rule, allowing us to confirm in a safe environment how the application would behave in such a difficult situation, and to apply the modifications required to avoid issues in case there was a real network problem.
This example configures:
Generic Kubernetes resources:
- 1 Service
- 1 Deployment
Istio resources:
- 1 Gateway
- 1 Virtual Service
# Based on
- [02-FaultInjection-abort](../02-FaultInjection-abort)
- https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPFaultInjection-Abort
# Configuration
## Service
Creates a service named `helloworld`.
This service listens for the port `80` expecting `HTTP` traffic and will forward the incoming traffic towards the port `80` from the destination pod.
```yaml
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 80
name: http
selector:
app: helloworld
```
## Deployment
Deploys a Nginx server that listens for the port `80`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-nginx
labels:
app: helloworld
spec:
replicas: 1
selector:
matchLabels:
app: helloworld
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: helloworld
image: nginx
resources:
requests:
cpu: "100m"
imagePullPolicy: IfNotPresent #Always
ports:
- containerPort: 80
```
## Gateway
Deploys an Istio gateway that's listening to the port `80` for `HTTP` traffic.
It doesn't filter for any specific host.
The `selector` field is used to "choose" which Istio Load Balancers this gateway will be assigned to.
The Istio `default` profile creates a Load Balancer in the namespace `istio-system` that has the label `istio: ingressgateway` set, allowing us to target that specific Load Balancer and assign this gateway resource to it.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
```
## VirtualService
The Virtual Service resources are used to route and filter the received traffic from the gateway resources, and route it towards the desired destination.
In this example we select the gateway `helloworld-gateway`, which is the [gateway that's described in the `Gateway` section](#gateway).
On this resource, we are also not limiting the incoming traffic to any specific host, allowing for all the incoming traffic to go through the rules set.
Here we created a rule that will be applied on `HTTP` related traffic (including `HTTPS` and `HTTP2`) when the destination path is exactly `/helloworld`.
This traffic will be forwarded to the port `80` of the destination service `helloworld` (the full path URL equivalent would be `helloworld.$NAMESPACE.svc.cluster.local`).
An internal URL rewrite is set because, if the URL was not modified, the request would attempt to reach the `/helloworld` path on the Nginx deployment, which currently has no such content and would result in an error code `404` (Not Found).
Additionally, we apply a "fault", where 90% of the traffic will be aborted and receive a 503 status code.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloworld-vs
spec:
hosts:
- "*"
gateways:
- helloworld-gateway
http:
- match:
- uri:
exact: /helloworld
route:
- destination:
host: helloworld
port:
number: 80
rewrite:
uri: "/"
fault:
abort:
percentage:
value: 90
httpStatus: 503
```
# Walkthrough
## Deploy resources
Deploy the resources.
```shell
kubectl apply -f ./
```
```text
deployment.apps/helloworld-nginx created
gateway.networking.istio.io/helloworld-gateway created
service/helloworld created
virtualservice.networking.istio.io/helloworld-vs created
```
## Wait for the pods to be ready
Wait for the Nginx deployments to be up and ready.
```shell
kubectl get deployment helloworld-nginx -w
```
```text
NAME READY UP-TO-DATE AVAILABLE AGE
helloworld-nginx 1/1 1 1 12s
```
## Test the service
### Get LB IP
To perform the desired tests, we will need to obtain the IP of the Istio Load Balancer that we selected in the [Gateway section](#gateway).
In my environment, the IP is `192.168.1.50`.
```shell
kubectl get svc -l istio=ingressgateway -A
```
```text
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.97.47.216 192.168.1.50 15021:31316/TCP,80:32012/TCP,443:32486/TCP 39h
```
### helloworld
We will use the `curl` command and feed it a template that prints the status code of the request.
Since the fault we set has a 90% chance of triggering, if you are "unlucky" and get an instant response from the backend, you might need to run the command multiple times in order to see the fault triggered.
```shell
curl -w @- -o /dev/null -s 192.168.1.21/helloworld <<'EOF'
http_code: %{http_code}\n
----------\n
time_total: %{time_total}\n
EOF
```
```text
http_code: 503
----------
time_total: 0.037870
```
From the command output, we can observe that the request resulted in a 503 status code.
We could continue sending curls until we receive a successful `200` status code.
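To get a feel for the 90/10 split without rerunning the command by hand, a small sketch like the following (assuming the Load Balancer IP `192.168.1.50` from the previous step) counts how many requests come back as `503` versus `200`:
```shell
# Send 20 requests and count the occurrences of each status code.
for i in $(seq 1 20); do
  curl -o /dev/null -s -w "%{http_code}\n" 192.168.1.50/helloworld
done | sort | uniq -c
```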
## Cleanup
Finally, a cleanup from the resources deployed.
```shell
kubectl delete -f ./
```
```text
deployment.apps "helloworld-nginx" deleted
gateway.networking.istio.io "helloworld-gateway" deleted
service "helloworld" deleted
virtualservice.networking.istio.io "helloworld-vs" deleted
```
# Links of interest
- https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPFaultInjection-Abort

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: helloworld
labels:
app: helloworld
service: helloworld
spec:
ports:
- port: 80
name: http
selector:
app: helloworld

View File

@@ -1,19 +1,3 @@
# https://github.com/istio/istio/blob/master/samples/helloworld/helloworld-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
@@ -35,7 +19,7 @@ spec:
rewrite:
uri: "/"
fault:
abort:
abort:
percentage:
value: 90
value: 90
httpStatus: 503

View File

@@ -1,7 +0,0 @@
# Continues from
- 01-hello_world_1_service_1_deployment
https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPFaultInjection-Delay

View File

@@ -1,11 +0,0 @@
# Continues from
- 05a-FaultInjection-delay
https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPFaultInjection-Abort
curl 192.168.1.50/helloworld -I

View File

@@ -0,0 +1,9 @@
## Description
This section focuses on applying configurations to test the resiliency of the deployed applications.
## Examples
- 01-FaultInjection-delay
- 02-FaultInjection-abort

View File

@@ -0,0 +1,353 @@
---
gitea: none
include_toc: true
---
# Description
This example deploys a Prometheus stack (Prometheus, Grafana, Alert Manager) through helm.
This will be used as a base for the future examples.
It's heavily recommended to have a base knowledge of Istio before proceeding to modify the settings according to your needs.
## Requisites
- Istio deployed and running at the namespace `istio-system`.
- Helm installed.
# Istio Files
## Gateway
Simple HTTP gateway.
It only allows traffic for the hosts `my.home` and `*.filter.home`.
Listens to the port 80 and expects HTTP (unencrypted) requests.
> **Note:** \
> I assume the Gateway is already deployed; therefore it is not mentioned nor specified in the walkthrough. If you don't have a gateway, proceed to deploy one before continuing.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: local-gateway
namespace: default
spec:
selector:
istio: local-ingress
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "my.home"
- "*.filter.home"
```
## VirtualService.yaml
2 simple Virtual Services for the Grafana and Prometheus services/dashboards.
The URLs for each one are:
- prometheus.my.home
- grafana.my.home
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: grafana-vs
namespace: default
labels:
app: grafana
spec:
hosts:
- "grafana.my.home"
gateways:
- default/local-gateway
http:
- route:
- destination:
host: prometheus-stack-01-grafana.observability.svc.cluster.local
port:
number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: prometheus-vs
namespace: observability
labels:
app: prometheus
spec:
hosts:
- "prometheus.my.home"
gateways:
- default/local-gateway
http:
- route:
- destination:
host: prometheus-stack-01-kube-p-prometheus.observability.svc.cluster.local
port:
number: 9090
```
# Walkthrough
## Create Observability NS
```shell
kubectl create namespace observability
```
For now the **istio-injection** label is set as a disabled placeholder; it will be enabled after the installation is completed.
If istio-injection is enabled, the Helm installation will **fail**.
I still have to check what causes this and why.
```shell
kubectl label namespaces observability istio-injection=disabled --overwrite=true
```
## PersistentVolume
I'm using an NFS provisioner; you can use whatever you want. (Optional)
In the file `stack_values.yaml` I specified that 2 volumes will be provisioned, one for Prometheus and another one for AlertManager.
If you don't want to provision volumes, leave that file blank, or remove the line that references it in the installation step.
I also increased the retention from 10 days (the default value) to 30 days, but since you won't have a volume, I don't think that will be much of an issue for you...
## Installation
I will be installing Prometheus Operator through Helm.
```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
```
```text
"prometheus-community" has been added to your repositories
```
```shell
helm show values prometheus-community/kube-prometheus-stack
```
```text
A lot of text; it's recommended to save the output to a file and go through it (at least use Ctrl+F or whatever other search option to find the things you might be interested in replacing/changing)
```
My `stack_values.yaml` file is:
```yaml
prometheus:
prometheusSpec:
retention: "30d"
storageSpec:
volumeClaimTemplate:
spec:
storageClassName: slow-nfs-01
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 50Gi
alertmanager:
alertmanagerSpec:
storage:
volumeClaimTemplate:
spec:
storageClassName: slow-nfs-01
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 10Gi
```
Besides the volumes mentioned [here](#persistentvolume), this also increases the retention from 10 days to 30.
If you haven't configured PersistentVolume storage, just skip the storage-related sections in `stack_values.yaml`. Note that once the pod is restarted, all data will be lost.
```shell
helm install prometheus-stack-01 prometheus-community/kube-prometheus-stack \
-n observability \
--values ./src/stack_values.yaml
```
```text
NAME: prometheus-stack-01
LAST DEPLOYED: Sun Jan 14 22:34:11 2024
NAMESPACE: observability
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace observability get pods -l "release=prometheus-stack-01"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
```
### Check running pods in namespace
Everything seems to be deployed and working correctly.
```shell
kubectl get pods -n observability
```
```text
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-stack-01-kube-p-alertmanager-0 2/2 Running 0 73s
prometheus-prometheus-stack-01-kube-p-prometheus-0 2/2 Running 0 73s
prometheus-stack-01-grafana-69bd95649b-w67xg 3/3 Running 0 76s
prometheus-stack-01-kube-p-operator-b97d5f9cc-cm2pl 1/1 Running 0 76s
prometheus-stack-01-kube-state-metrics-554fd7bf8b-z62gv 1/1 Running 0 76s
prometheus-stack-01-prometheus-node-exporter-7bwbd 1/1 Running 0 76s
prometheus-stack-01-prometheus-node-exporter-dvqc6 1/1 Running 0 76s
prometheus-stack-01-prometheus-node-exporter-nfm5g 1/1 Running 0 76s
prometheus-stack-01-prometheus-node-exporter-ssfkb 1/1 Running 0 76s
```
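If you provisioned the volumes described in the [PersistentVolume](#persistentvolume) step, you can optionally confirm that the claims were bound (a minimal check; the PVC names depend on your release and storage class):
```shell
# The Prometheus and Alertmanager PVCs should show STATUS "Bound".
kubectl get pvc -n observability
```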
### Enable Istio Injection
Let's enable back istio-injection on the namespace.
```shell
kubectl label namespaces observability istio-injection=enabled --overwrite=true
```
### Delete all pods so they are recreated with the Istio sidecar
To update the containers, we will need to delete/recreate all of them.
```shell
kubectl delete pods -n observability --all
```
```text
pod "alertmanager-prometheus-stack-01-kube-p-alertmanager-0" deleted
pod "prometheus-prometheus-stack-01-kube-p-prometheus-0" deleted
pod "prometheus-stack-01-grafana-69bd95649b-w67xg" deleted
pod "prometheus-stack-01-kube-p-operator-b97d5f9cc-cm2pl" deleted
pod "prometheus-stack-01-kube-state-metrics-554fd7bf8b-z62gv" deleted
pod "prometheus-stack-01-prometheus-node-exporter-7bwbd" deleted
pod "prometheus-stack-01-prometheus-node-exporter-dvqc6" deleted
pod "prometheus-stack-01-prometheus-node-exporter-nfm5g" deleted
pod "prometheus-stack-01-prometheus-node-exporter-ssfkb" deleted
```
### Check pods status (again)
Everything seems to be deployed and working correctly.
```shell
kubectl get pods -n observability
```
```text
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-stack-01-kube-p-alertmanager-0 3/3 Running 0 44s
prometheus-prometheus-stack-01-kube-p-prometheus-0 3/3 Running 0 43s
prometheus-stack-01-grafana-69bd95649b-24v58 4/4 Running 0 46s
prometheus-stack-01-kube-p-operator-b97d5f9cc-5bdwh 2/2 Running 1 (43s ago) 46s
prometheus-stack-01-kube-state-metrics-554fd7bf8b-wjw4d 2/2 Running 2 (41s ago) 46s
prometheus-stack-01-prometheus-node-exporter-4266g 1/1 Running 0 46s
prometheus-stack-01-prometheus-node-exporter-lmxdj 1/1 Running 0 45s
prometheus-stack-01-prometheus-node-exporter-shd72 1/1 Running 0 45s
prometheus-stack-01-prometheus-node-exporter-wjhdr 1/1 Running 0 45s
```
### Gateway
I have my gateways already created (in this scenario I will be using the local gateway).
### VirtualService
I will create 2 Virtual Service entries, one for the Grafana dashboard, and another for the Prometheus dashboard:
- Prometheus dashboard URL: "prometheus.llb.filter.home"
- Grafana dashboard URL: "grafana.llb.filter.home"
```shell
kubectl apply -f ./src/VirtualService.yaml
```
```text
virtualservice.networking.istio.io/grafana-vs created
virtualservice.networking.istio.io/prometheus-vs created
```
## Prometheus
As a simple example of being able to access Kubernetes metrics, you can run the following PromQL queries:
### Running pods per node
We can see the value "node=XXXX", which matches the node from our Kubernetes nodes available within the cluster.
```promql
kubelet_running_pods
```
### Running pods per namespace
Right now, on the namespace "observability" I have a total of 9 pods running.
```promql
sum(kube_pod_status_ready) by (namespace)
```
You can verify this by running:
```shell
kubectl get pods -n observability --no-headers=true | nl
```
```text
1 alertmanager-prometheus-stack-01-kube-p-alertmanager-0 3/3 Running 0 40m
2 prometheus-prometheus-stack-01-kube-p-prometheus-0 3/3 Running 0 40m
3 prometheus-stack-01-grafana-69bd95649b-24v58 4/4 Running 0 40m
4 prometheus-stack-01-kube-p-operator-b97d5f9cc-5bdwh 2/2 Running 1 (40m ago) 40m
5 prometheus-stack-01-kube-state-metrics-554fd7bf8b-wjw4d 2/2 Running 2 (40m ago) 40m
6 prometheus-stack-01-prometheus-node-exporter-4266g 1/1 Running 0 40m
7 prometheus-stack-01-prometheus-node-exporter-lmxdj 1/1 Running 0 40m
8 prometheus-stack-01-prometheus-node-exporter-shd72 1/1 Running 0 40m
9 prometheus-stack-01-prometheus-node-exporter-wjhdr 1/1 Running 0 40m
```
Which returns a total of 9 pods, with a status of "Running".
### Running containers per namespace
Currently, this is returning 18 containers running on the namespace **observability**.
```promql
sum(kube_pod_container_status_running) by (namespace)
```
Listing again the pods running within the namespace and adding up the container counts, I can confirm that the total number of containers running within the namespace adds up to 18, matching the Prometheus data.
```shell
kubectl get pods -n observability
```
```text
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-stack-01-kube-p-alertmanager-0 3/3 Running 0 45m
prometheus-prometheus-stack-01-kube-p-prometheus-0 3/3 Running 0 45m
prometheus-stack-01-grafana-69bd95649b-24v58 4/4 Running 0 45m
prometheus-stack-01-kube-p-operator-b97d5f9cc-5bdwh 2/2 Running 1 (45m ago) 45m
prometheus-stack-01-kube-state-metrics-554fd7bf8b-wjw4d 2/2 Running 2 (45m ago) 45m
prometheus-stack-01-prometheus-node-exporter-4266g 1/1 Running 0 45m
prometheus-stack-01-prometheus-node-exporter-lmxdj 1/1 Running 0 45m
prometheus-stack-01-prometheus-node-exporter-shd72 1/1 Running 0 45m
prometheus-stack-01-prometheus-node-exporter-wjhdr 1/1 Running 0 45m
```

View File

@@ -0,0 +1,16 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: local-gateway
namespace: default
spec:
selector:
istio: local-ingress
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "my.home"
- "*.filter.home"

View File

@@ -0,0 +1,37 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: grafana-vs
namespace: default
labels:
app: grafana
spec:
hosts:
- "grafana.my.home"
gateways:
- default/local-gateway
http:
- route:
- destination:
host: prometheus-stack-01-grafana.observability.svc.cluster.local
port:
number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: prometheus-vs
namespace: observability
labels:
app: prometheus
spec:
hosts:
- "prometheus.my.home"
gateways:
- default/local-gateway
http:
- route:
- destination:
host: prometheus-stack-01-kube-p-prometheus.observability.svc.cluster.local
port:
number: 9090

View File

@@ -0,0 +1,21 @@
prometheus:
prometheusSpec:
retention: "30d"
storageSpec:
volumeClaimTemplate:
spec:
storageClassName: slow-nfs-01
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 50Gi
alertmanager:
alertmanagerSpec:
storage:
volumeClaimTemplate:
spec:
storageClassName: slow-nfs-01
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 10Gi

View File

@@ -0,0 +1,60 @@
## Description
Through the use of Prometheus CRDs, we deploy PodMonitor and ServiceMonitor objects, which will scrape metrics from the Envoy proxies attached to each pod and from the Istiod deployment.
## Requirements
- Complete step [01-Create_Prometheus_Stack](../01-Create_Prometheus_Stack)
## Istio Metrics
Now that there is a functional Prometheus-Grafana-Alertmanager stack set up, the next step is to deploy Prometheus scraping jobs/configs to gather:
- Envoy proxy metrics
- Istiod metrics.
> **Note:** \
> The monitors deployed are based on the [Istio Prometheus Operator Example](https://github.com/istio/istio/blob/1.20.2/samples/addons/extras/prometheus-operator.yaml)
```shell
kubectl create -f PrometheusIstioAgent.yaml
```
```text
servicemonitor.monitoring.coreos.com/istiod-metrics-monitor created
podmonitor.monitoring.coreos.com/envoy-stats-monitor created
```
To update the list of Prometheus targets, we can wait for a bit until they get picked up automatically; give it a minute or two, get off the PC, grab something to drink or stretch your legs.
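If waiting blindly isn't your thing, a rough sketch of checking from the terminal (assuming the Prometheus service name `prometheus-stack-01-kube-p-prometheus` used earlier in the VirtualService) is to port-forward Prometheus and poll its targets API:
```shell
# Port-forward Prometheus locally and count the monitors that show up as targets.
kubectl -n observability port-forward svc/prometheus-stack-01-kube-p-prometheus 9090:9090 &
sleep 2
curl -s localhost:9090/api/v1/targets | grep -o 'envoy-stats-monitor\|istiod-metrics-monitor' | sort | uniq -c
kill %1
```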
### Check Targets
Once the Prometheus pod is up and running again, if we access the web UI and go to the section **Status > Targets**, we can list all the available targets.
Once there, I am able to see the following entries:
- **podMonitor/observability/envoy-stats-monitor/0 (15/15 up)**
- **serviceMonitor/observability/istiod-metrics-monitor/0 (2/2 up)**
### Check through Prometheus queries
Now, back to the **Graph** section, we can confirm if we are receiving metrics from **Istiod** and **Envoy**.
#### Istiod
Very simple and straightforward, the uptime for each one of the **Istiod** pods.
```promql
istiod_uptime_seconds
```
#### Envoy
Requests grouped by `destination_service_name`.
```promql
sum(istio_requests_total) by (destination_service_name)
```

View File

@@ -0,0 +1,66 @@
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
app: kube-prometheus-stack-prometheus
release: prometheus-stack-01
name: istiod-metrics-monitor
namespace: observability
spec:
jobLabel: istio
targetLabels: [app]
selector:
matchExpressions:
- {key: istio, operator: In, values: [pilot]}
namespaceSelector:
any: true
endpoints:
- port: http-monitoring
interval: 15s
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: envoy-stats-monitor
labels:
app: kube-prometheus-stack-prometheus
release: prometheus-stack-01
namespace: observability
spec:
selector:
matchExpressions:
- {key: istio-prometheus-ignore, operator: DoesNotExist}
namespaceSelector:
any: true
jobLabel: envoy-stats
podMetricsEndpoints:
- path: /stats/prometheus
interval: 15s
relabelings:
- action: keep
sourceLabels: [__meta_kubernetes_pod_container_name]
regex: "istio-proxy"
- action: keep
sourceLabels: [__meta_kubernetes_pod_annotationpresent_prometheus_io_scrape]
- action: replace
regex: (\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4})
replacement: '[$2]:$1'
sourceLabels:
- __meta_kubernetes_pod_annotation_prometheus_io_port
- __meta_kubernetes_pod_ip
targetLabel: __address__
- action: replace
regex: (\d+);((([0-9]+?)(\.|$)){4})
replacement: $2:$1
sourceLabels:
- __meta_kubernetes_pod_annotation_prometheus_io_port
- __meta_kubernetes_pod_ip
targetLabel: __address__
- action: labeldrop
regex: "__meta_kubernetes_pod_label_(.+)"
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: namespace
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: pod_name

View File

@@ -0,0 +1,58 @@
## Description
Shares some dashboards ready to use once Istio metrics are added to the Prometheus stack.
This is extremely simple to be honest.
## Requirements
- Complete step [02-Add_Istio_Scrapping_Metrics](../02-Add_Istio_Scrapping_Metrics)
## Grafana
### Default credentials
> **Note:** \
> Since Grafana has no storage/volume, **all changes will be lost**
User: admin
Password: prom-operator
Just check any dashboard to see if it's working correctly.
I personally recommend the dashboard:
- **Node Exporter / USE Method / Node**
Lists the resource utilization for each one of the Nodes.
IDK check whatever you want, there are some good predefined graphs already.
### Want to change credentials?
Just log into the admin user and change whatever the hell you want.
Username, email, password.
Select different preferences..., whatever.
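If you would rather set the admin password at install/upgrade time, a hedged sketch (assuming the chart's `grafana.adminPassword` value, which is passed through to the Grafana subchart) could look like this:
```shell
# Re-use the existing values and only override the Grafana admin password.
helm upgrade prometheus-stack-01 prometheus-community/kube-prometheus-stack \
  -n observability \
  --reuse-values \
  --set grafana.adminPassword='my-new-password'
```
Keep in mind that, as noted above, changes made through the UI are lost when the pod restarts, while a value set through Helm survives redeployments.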
### Want to manage/create Users/Teams?
Select `Administration` > `Users and Access`.
There you will be able to create/manage **Users**, **Teams** and **Service Accounts**.
### Istio related Dashboards
Here is a list of ready to go Istio related dashboards that you might want to set up on your Grafana Deployment.
- https://grafana.com/grafana/dashboards/7630-istio-workload-dashboard/
- https://grafana.com/grafana/dashboards/7636-istio-service-dashboard/
- https://grafana.com/grafana/dashboards/7645-istio-control-plane-dashboard/
- https://grafana.com/grafana/dashboards/7639-istio-mesh-dashboard/
The dashboards were found here:
- https://grafana.com/orgs/istio/dashboards
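If you prefer to keep the dashboard JSON around (for manual import or later provisioning), a rough sketch using grafana.com's download endpoint (an assumption; if `latest` is not accepted, pick a specific revision number from the dashboard page) would be:
```shell
# Download the Istio Workload Dashboard (ID 7630) as JSON for manual import.
curl -sL https://grafana.com/api/dashboards/7630/revisions/latest/download -o istio-workload-dashboard.json
```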

13-Monitoring/README.md Normal file
View File

@@ -0,0 +1,15 @@
## Examples
- 01-Create_Prometheus_Stack
- 02-Add_Istio_Scrapping_Metrics
- 03-Grafana_Istio_Dashboards
In this set of examples, steps 1, 2 and 3 are progressive, and they use resources set up in each of the previous examples.
## Quick summary
Example 1 deploys the Prometheus stack.
Example 2 adds jobs to Prometheus to scrape Istio-related metrics.
Example 3 just shares a couple of ready to use Istio related dashboards.

View File

@@ -108,7 +108,7 @@ spec:
## DestinationRule
As seen in the example [02-Traffic_management/09-HTTPS-backend](../../04-Backends/02-HTTPS-backend), where we configure Istio to use an `HTTPS` backend, the same configuration is applied on this case (yes, I am aware that a `ServiceEntry` is also a backend).
As seen in the example [04-Backends/03-HTTPS-backend)](../../04-Backends/03-HTTPS-backend), where we configure Istio to use an `HTTPS` backend, the same configuration is applied on this case (yes, I am aware that a `ServiceEntry` is also a backend).
For such, we deploy a `DestinationRule` setting to expect to terminate the TLS traffic, for the traffic with resource destination `github.com`, and port `8443`, which matches the settings set in our [ServiceEntry](#serviceentry) deployed.

View File

@@ -1,3 +1,15 @@
# Examples of mix configurations
## Description
This section is to put non-specific examples, being able to be as flexible as I desire.
## Examples
- 01-HTTPS-Gateway_Service_Entry
- Minecraft
# i.e of mix configurations
ie. gateway with HTTPS traffic through simple TLS, and a Service Entry as a backend.
ie. gateway with HTTPS traffic through simple TLS, and a Service Entry as a backend.

99-resources/README.md Normal file
View File

@@ -0,0 +1,16 @@
## Description
This section is reserved in case I need to provide any resource whatsoever.
For example, it contains the files used to create a Docker image for a Nginx server that contains an HTTPS web service, which is used for the examples shared.
## Contents
- HTTPS-NGINX-DOCKERFILE
## i.e of mix configurations
ie. gateway with HTTPS traffic through simple TLS, and a Service Entry as a backend.

View File

@@ -4,12 +4,14 @@ I have absolutely used as a reference and/or template other parties configuratio
I have tried to reference as much as possible as long it's relevant/useful for the reader.
Refer to the specific `README.md` in each example for more information, some documentation is still in progress.
Currently, the resources are under a relocation and the folders might contain things that don't _really match the topic_.
Refer to the specific `README.md` in each example for more information.
# Tree of folders
```shell
tree -d | grep -v src$
```
```text
├── 00-Troubleshooting
├── 01-Getting_Started
@@ -33,8 +35,8 @@ Currently, the resources are under a relocation and the folders might contain th
│   └── 07-HTTP-to-HTTPS-traffic-redirect
├── 04-Backends
│   ├── 01-Service_Entry
│   ├── 02-HTTPS-backend
│   └── 03-HTTPS-backend-with-MTLS
│   ├── 02-Outboud-Traffic-Policy
│   └── 03-HTTPS-backend
├── 05-Sidecar
│   ├── 01-ingress-proxy-forwarding
│   └── 02-egress-proxy
@@ -42,20 +44,24 @@ Currently, the resources are under a relocation and the folders might contain th
│   ├── 01-Envoy-add-response-headers
│   └── 02-envoy-logging
├── 08-AuthorizationPolicy
│   ├── 01-target-namespaces
│   ├── 02-target-service-accounts
│   └── 03-target-deployments
│   ├── 01-AuthorizationPolicy-Target-Namespaces
│   ├── 02-AuthorizationPolicy-Target-Service-Accounts
│   └── 03-AuthorizationPolicy-Target-Deployments
├── 09-Ingress
│   └── 01-Create-Istio-LoadBalancer
│   └── 01-Ingress-IstioOperator
│   └── IstioOperator
├── 10-mTLS_PeerAuthentication
│   ├── 01-disable-mTLS
│   ├── 02-portLevelMtls
│   └── 06-mTLS
│   ├── 01-mTLS
│   ├── 02-disable-mTLS
│   └── 03-mTLS-per-port-settings
├── 11-Fault_Injection
│   ├── 05a-FaultInjection-delay
│   └── 05b-FaultInjection-abort
│   ├── 01-FaultInjection-delay
│   └── 02-FaultInjection-abort
├── 12-CircuitBreaking
├── 13-monitoring
│   ├── 01-Create_Prometheus_Stack
│   ├── 02-Add_Istio_Scrapping_Metrics
│   └── 03-Grafana_Istio_Dashboards
├── 90-MixConfigs
│   ├── 01-HTTPS-Gateway_Service_Entry
│   └── Minecraft
@@ -64,6 +70,13 @@ Currently, the resources are under a relocation and the folders might contain th
└── HTTPS-NGINX-DOCKERFILE
```
#### "Why is 07 missing"
Previously there was a folder that got refactored.
Eventually the spot will be filled back.
I want to avoid renaming folders unless required, as it could break link references within the documentation.
# Glossary