commit
2f87a19545
@ -1,102 +0,0 @@
# Continues from

- 05-hello_world_1_Service_Entry

# Description

This example compares the behavior between setting the MeshConfig `OutboundTrafficPolicy.mode` setting to `REGISTRY_ONLY` and `ALLOW_ANY`.

- ALLOW_ANY: Allows all egress/outbound traffic from the mesh.

- REGISTRY_ONLY: Restricts outbound traffic to services that figure in the service registry and the ServiceEntry objects.

More info regarding this configuration can be found in the pertinent documentation (https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-OutboundTrafficPolicy-Mode)

## Runthrough

### Set ALLOW_ANY outbound traffic policy

```shell
istioctl install --set profile=default -y --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY
```

### Deploy resources

```shell
$ kubectl apply -f ./
service/helloworld created
deployment.apps/helloworld-nginx created
serviceentry.networking.istio.io/external-svc created
gateway.networking.istio.io/helloworld-gateway created
virtualservice.networking.istio.io/helloworld-vs created
```

### Get LB IP

```shell
$ kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.97.47.216   192.168.1.50   15021:31316/TCP,80:32012/TCP,443:32486/TCP   39h
```

### Test deployments

```shell
$ curl 192.168.1.50/helloworld -I
HTTP/1.1 200 OK
server: istio-envoy
date: Thu, 20 Apr 2023 18:03:18 GMT
content-type: text/html
content-length: 615
last-modified: Tue, 28 Mar 2023 15:01:54 GMT
etag: "64230162-267"
accept-ranges: bytes
x-envoy-upstream-service-time: 73
```

```shell
$ curl 192.168.1.50/external -I
HTTP/1.1 200 OK
date: Thu, 20 Apr 2023 18:03:24 GMT
content-type: text/html
content-length: 5186
last-modified: Mon, 17 Mar 2014 17:25:03 GMT
expires: Thu, 31 Dec 2037 23:55:55 GMT
cache-control: max-age=315360000
x-envoy-upstream-service-time: 228
server: istio-envoy
```

### Test egress from the helloworld deployment

It returns a 301 code, meaning that it was able to reach the destination, which attempted to redirect the traffic from HTTP to HTTPS.

```shell
$ kubectl exec -i -t "$(kubectl get pod -l app=helloworld | tail -n 1 | awk '{print $1}')" -- curl wikipedia.com -I
HTTP/1.1 301 Moved Permanently
server: envoy
date: Thu, 20 Apr 2023 18:06:57 GMT
content-type: text/html
content-length: 169
location: https://wikipedia.com/
x-envoy-upstream-service-time: 65
```

### Set REGISTRY_ONLY outbound traffic policy

```shell
istioctl install --set profile=default -y --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY
```

### Test (again) egress from the helloworld deployment

It returns a 502 code, meaning that it wasn't able to reach the destination.

```shell
$ kubectl exec -i -t "$(kubectl get pod -l app=helloworld | tail -n 1 | awk '{print $1}')" -- curl wikipedia.com -I
HTTP/1.1 502 Bad Gateway
date: Thu, 20 Apr 2023 18:08:37 GMT
server: envoy
transfer-encoding: chunked
```
@ -1,57 +0,0 @@
# https://github.com/istio/istio/blob/master/samples/helloworld/helloworld.yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
    service: helloworld
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-nginx
  labels:
    app: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: nginx
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent #Always
        ports:
        - containerPort: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc
spec:
  hosts:
  - help.websiteos.com
  # /websiteos/example_of_a_simple_html_page.htm
  # - http://help.websiteos.com/websiteos/example_of_a_simple_html_page.htm
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
---
@ -1,10 +0,0 @@
# Examples

- 01-Outboud-Traffic-Policy

## Additional

https://istio.io/latest/docs/tasks/observability/distributed-tracing/mesh-and-proxy-config/
@ -1,2 +0,0 @@
https://raw.githubusercontent.com/istio/istio/release-1.17/samples/httpbin/sample-client/fortio-deploy.yaml
3
.placeholder/13-WASM_Modules/README.md
Normal file
@ -0,0 +1,3 @@
https://github.com/istio-ecosystem/wasm-extensions

https://github.com/istio-ecosystem/wasm-extensions/tree/master/extensions/basic_auth
@ -24,9 +24,9 @@ Bear in mind that when Istio is communicating with resources externals to the me
Also, policy enforcement is performed on the client side instead of the server side.

> **Note:**\
> For more information regarding the `resolution` field or the `location` field, refer to the following official Istio documentation:\
> - [ServiceEntry.Location](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Location)\
> - [ServiceEntry.Resolution](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Resolution)

```yaml
apiVersion: networking.istio.io/v1alpha3
@ -1,57 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      credentialName: my-tls-cert-secret
      mode: SIMPLE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld-vs
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - name: https-vs
    match:
    - port: 80
    - port: 443
    route:
    - destination:
        host: helloworld.default.svc.cluster.local
        port:
          number: 8443
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
  namespace: default
spec:
  host: helloworld.default.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8443
      tls:
        mode: SIMPLE
25
04-Backends/02-Outboud-Traffic-Policy/Deployment.yaml
Executable file
@ -0,0 +1,25 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-nginx
  labels:
    app: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: nginx
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent #Always
        ports:
        - containerPort: 80
14
04-Backends/02-Outboud-Traffic-Policy/Gateway.yaml
Executable file
@ -0,0 +1,14 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
338
04-Backends/02-Outboud-Traffic-Policy/README.md
Executable file
@ -0,0 +1,338 @@
---
gitea: none
include_toc: true
---

# Description

Based on the previous example, where we configured an external service through a `ServiceEntry` object, this example compares the behavior between setting the MeshConfig `OutboundTrafficPolicy.mode` setting to `REGISTRY_ONLY` and `ALLOW_ANY`.

- ALLOW_ANY: Allows all egress/outbound traffic from the mesh.

- REGISTRY_ONLY: Restricts outbound traffic to services that figure in the service registry and the ServiceEntry objects.

More info regarding this configuration can be found in the pertinent documentation (https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-OutboundTrafficPolicy-Mode)

> **Note:**\
> For more information about the image used, refer to [here](https://hub.docker.com/r/oriolfilter/https-nginx-demo)

# Based on

- [01-Service_Entry](../01-Service_Entry)
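To check which mode a running mesh currently uses, the mesh config can be read back from the `istio` ConfigMap in `istio-system` (for example with `kubectl get configmap istio -n istio-system -o jsonpath='{.data.mesh}'`). A standalone sketch of filtering that output, run here against canned text so it can be followed without a cluster:

```shell
# Canned mesh config, mimicking the `mesh` key of the `istio` ConfigMap.
mesh_config='accessLogFile: /dev/stdout
outboundTrafficPolicy:
  mode: ALLOW_ANY'

# Print the policy block: the key line plus the line after it.
printf '%s\n' "$mesh_config" | grep -A1 'outboundTrafficPolicy'
```

On a live cluster, replace the `printf` with the `kubectl get configmap` command above.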
# Configuration

## Gateway

Deploys an Istio gateway that's listening on port `80` for `HTTP` traffic.

It doesn't filter for any specific host.

The `selector` field is used to "choose" which Istio load balancers this gateway will be assigned to.

The Istio `default` profile creates a load balancer in the namespace `istio-system` that has the label `istio: ingressgateway` set, allowing us to target that specific load balancer and assign this gateway resource to it.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```
## VirtualService

This configuration hosts 2 backends: one is the deployed service `helloworld.default.svc.cluster.local`, which will be accessible through the URL path `/helloworld`.

The second backend will be accessible through the URL path `/external`, and will use the deployed `ServiceEntry` object as its destination; it also has a timeout setting of 3 seconds.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld-vs
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /helloworld
    route:
    - destination:
        host: helloworld
        port:
          number: 80
    rewrite:
      uri: "/"

  - timeout: 3s
    match:
    - uri:
        exact: "/external"
    route:
    - destination:
        host: help.websiteos.com
        port:
          number: 80
    rewrite:
      uri: "/websiteos/example_of_a_simple_html_page.htm"
    headers:
      request:
        set:
          HOST: "help.websiteos.com"
```
## Service

The service will forward incoming HTTP TCP traffic from port `80` towards the deployment's port `80`.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
    service: helloworld
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: helloworld
```
## Deployment

The Nginx deployment listens on port 80.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-nginx
  labels:
    app: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: nginx
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent #Always
        ports:
        - containerPort: 80
```
## ServiceEntry

This `ServiceEntry` resource defines the URL `help.websiteos.com` as a destination.

Note that `location` is set to `MESH_EXTERNAL` and that `resolution` is set to `DNS`; this means that the resource is external to our `Istio Service Mesh`, and the URL will be resolved through `DNS`.

Bear in mind that when Istio is communicating with resources external to the mesh, `mTLS` is disabled.

Also, policy enforcement is performed on the client side instead of the server side.

> **Note:**\
> For more information regarding the `resolution` field or the `location` field, refer to the following official Istio documentation:\
> - [ServiceEntry.Location](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Location)\
> - [ServiceEntry.Resolution](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry-Resolution)

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc
spec:
  hosts:
  - help.websiteos.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
```
# Walkthrough

## Set ALLOW_ANY outbound traffic policy

The first step will be to have the cluster with the `meshConfig.outboundTrafficPolicy.mode` setting set to `ALLOW_ANY`.

In case you are not using a "free to destroy" sandbox, you should update the setting through the `IstioOperator` object.

```shell
istioctl install --set profile=default -y --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY
```
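The same installation can also be expressed declaratively instead of through `--set` flags; a minimal sketch of an `IstioOperator` manifest (the file name `operator.yaml` is only an example, not part of this repository):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
    outboundTrafficPolicy:
      mode: ALLOW_ANY  # switch to REGISTRY_ONLY for the second half of this walkthrough
```

Applied with `istioctl install -f operator.yaml -y`.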
## Deploy resources

```shell
kubectl apply -f ./
```
```text
deployment.apps/helloworld-nginx created
gateway.networking.istio.io/helloworld-gateway created
service/helloworld created
serviceentry.networking.istio.io/external-svc created
virtualservice.networking.istio.io/helloworld-vs created
```
## Get LB IP

```shell
kubectl get svc istio-ingressgateway -n istio-system
```

```text
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.97.47.216   192.168.1.50   15021:31316/TCP,80:32012/TCP,443:32486/TCP   39h
```
## Test deployments

```shell
curl 192.168.1.50/helloworld -I
```

```text
HTTP/1.1 200 OK
server: istio-envoy
date: Sat, 14 Oct 2023 10:53:45 GMT
content-type: text/html
content-length: 615
last-modified: Tue, 15 Aug 2023 17:03:04 GMT
etag: "64dbafc8-267"
accept-ranges: bytes
x-envoy-upstream-service-time: 53
```

```shell
curl 192.168.1.50/external -I
```

```text
HTTP/1.1 200 OK
date: Sat, 14 Oct 2023 10:54:13 GMT
content-type: text/html
content-length: 5186
last-modified: Mon, 17 Mar 2014 17:25:03 GMT
expires: Thu, 31 Dec 2037 23:55:55 GMT
cache-control: max-age=315360000
x-envoy-upstream-service-time: 306
server: istio-envoy
```
## Test egress from the helloworld deployment

It returns a 301 code, meaning that it was able to reach the destination, which attempted to redirect the traffic from HTTP to HTTPS.

```shell
kubectl exec -i -t "$(kubectl get pod -l app=helloworld | tail -n 1 | awk '{print $1}')" -- curl wikipedia.com -I
```

```text
HTTP/1.1 301 Moved Permanently
server: envoy
date: Sat, 14 Oct 2023 10:54:34 GMT
content-type: text/html
content-length: 169
location: https://wikipedia.com/
x-envoy-upstream-service-time: 61
```
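The pod name in the `kubectl exec` command above is extracted with a small `tail`/`awk` pipeline. As a standalone sketch of what that pipeline does (run here against canned `kubectl get pod` output instead of a live cluster; the pod name shown is made up):

```shell
# Canned output mimicking `kubectl get pod -l app=helloworld`.
kubectl_output='NAME                                READY   STATUS    RESTARTS   AGE
helloworld-nginx-7d4b9c9b5d-x2k4q   2/2     Running   0          5m'

# `tail -n 1` drops the header row (assuming a single matching pod),
# `awk '{print $1}'` keeps the first column: the pod name.
pod_name="$(printf '%s\n' "$kubectl_output" | tail -n 1 | awk '{print $1}')"
echo "$pod_name"
```

Note that with more than one matching pod, `tail -n 1` simply picks the last one listed.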
## Set REGISTRY_ONLY outbound traffic policy

In case you are not using a "free to destroy" sandbox, you should update the setting through the `IstioOperator` object.

```shell
istioctl install --set profile=default -y --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY
```
## Test (again) egress from the helloworld deployment

It returns a 502 code, meaning that it wasn't able to reach the destination.

```shell
kubectl exec -i -t "$(kubectl get pod -l app=helloworld | tail -n 1 | awk '{print $1}')" -- curl wikipedia.com -I
```

```text
HTTP/1.1 502 Bad Gateway
date: Thu, 20 Apr 2023 18:08:37 GMT
server: envoy
transfer-encoding: chunked
```

This allowed us to confirm how the `outboundTrafficPolicy.mode` setting influences whether outbound traffic can reach its destination.
## Cleanup

```shell
kubectl delete -f ./
```
```text
deployment.apps "helloworld-nginx" deleted
gateway.networking.istio.io "helloworld-gateway" deleted
service "helloworld" deleted
serviceentry.networking.istio.io "external-svc" deleted
virtualservice.networking.istio.io "helloworld-vs" deleted
```

# Links of Interest

- https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#controlled-access-to-external-services

- https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services
13
04-Backends/02-Outboud-Traffic-Policy/Service.yaml
Normal file
@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
    service: helloworld
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: helloworld
13
04-Backends/02-Outboud-Traffic-Policy/ServiceEntry.yaml
Normal file
@ -0,0 +1,13 @@
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc
spec:
  hosts:
  - help.websiteos.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
16
.placeholder/07-MeshConfig/01-Outboud-Traffic-Policy/gateway.yaml → 04-Backends/02-Outboud-Traffic-Policy/VirtualService.yaml
Executable file → Normal file
@ -1,19 +1,3 @@
# https://github.com/istio/istio/blob/master/samples/helloworld/helloworld-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
@ -1,20 +1,3 @@
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
    service: helloworld
spec:
  ports:
  - port: 8443
    name: https
    targetPort: 443
    protocol: TCP
    appProtocol: https
  selector:
    app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
13
04-Backends/03-HTTPS-backend/DestinationRule.yaml
Normal file
@ -0,0 +1,13 @@
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
  namespace: default
spec:
  host: helloworld.default.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8443
      tls:
        mode: SIMPLE
23
04-Backends/03-HTTPS-backend/Gateway.yaml
Executable file
@ -0,0 +1,23 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      credentialName: my-tls-cert-secret
      mode: SIMPLE
16
04-Backends/03-HTTPS-backend/Service.yaml
Normal file
@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
    service: helloworld
spec:
  ports:
  - port: 8443
    name: https
    targetPort: 443
    protocol: TCP
    appProtocol: https
  selector:
    app: helloworld
19
04-Backends/03-HTTPS-backend/VirtualService.yaml
Normal file
@ -0,0 +1,19 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld-vs
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - name: https-vs
    match:
    - port: 80
    - port: 443
    route:
    - destination:
        host: helloworld.default.svc.cluster.local
        port:
          number: 8443
@ -6,4 +6,12 @@ This section will focus on the interaction with the backend and routing the traf
01-Service_Entry
02-HTTPS-backend
03-HTTPS-backend-with-mTLS (TODO)
03-Outboud-Traffic-Policy
04-HTTPS-backend-with-mTLS (TODO)

## Heads up

In the example `03-Outboud-Traffic-Policy`, Istio's `meshConfig.outboundTrafficPolicy` will need to be modified.

That example uses the `istioctl install` command to set it up, as I assume you are testing these examples in a sandbox that you are free to "destroy".
353
13-Monitoring/01-Create_Prometheus_Stack/README.md
Normal file
@ -0,0 +1,353 @@
---
gitea: none
include_toc: true
---

# Description

This example deploys a Prometheus stack (Prometheus, Grafana, Alertmanager) through Helm.

This will be used as a base for future examples.

It's heavily recommended to have a base knowledge of Istio before proceeding to modify the settings according to your needs.

## Requisites

- Istio deployed and running in the namespace `istio-system`.
- Helm installed.
# Istio Files

## Gateway

Simple HTTP gateway.

It only allows traffic from the domain `my.home` and its subdomains.

It listens on port 80 and expects HTTP (unencrypted) requests.

> **Note:**\
> I assume the Gateway is already deployed, therefore it's not mentioned nor specified in the walkthrough. If you don't have a gateway, proceed to deploy one before continuing.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: local-gateway
  namespace: default
spec:
  selector:
    istio: local-ingress
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "my.home"
    - "*.filter.home"
```
## VirtualService.yaml

2 simple VirtualServices for the Grafana and Prometheus services/dashboards.

The URLs for each one are:

- prometheus.my.home

- grafana.my.home

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: default
  labels:
    app: grafana
spec:
  hosts:
  - "grafana.my.home"
  gateways:
  - default/local-gateway
  http:
  - route:
    - destination:
        host: prometheus-stack-01-grafana.observability.svc.cluster.local
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: prometheus-vs
  namespace: observability
  labels:
    app: prometheus
spec:
  hosts:
  - "prometheus.my.home"
  gateways:
  - default/local-gateway
  http:
  - route:
    - destination:
        host: prometheus-stack-01-kube-p-prometheus.observability.svc.cluster.local
        port:
          number: 9090
```
# Walkthrough

## Create Observability NS

```shell
kubectl create namespace observability
```

Set a placeholder label on the namespace; **istio-injection** will be enabled after the installation is completed.

If istio-injection is enabled, the Helm installation will **fail**.

I have to check on what/why.

```shell
kubectl label namespaces observability istio-injection=disabled --overwrite=true
```
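The same label can also be set declaratively; a minimal sketch of the Namespace manifest (the name matches the namespace used above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: observability
  labels:
    istio-injection: disabled  # flipped to "enabled" later in this walkthrough
```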
# PersistentVolume

I'm using an NFS provisioner; you can use whatever you want. (Optional)

In the file `stack_values.yaml` I specified that 2 volumes will be provisioned: one for Prometheus, and another one for Alertmanager.

If you don't want to provision volumes, leave that file blank, or remove the corresponding line in the installation step.

I also increased the retention from 10 days (the default value) to 30 days, but since you won't have a volume, I don't think that will be much of an issue for you...
## Installation

I will be installing the Prometheus Operator through Helm.

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
```

```text
"prometheus-community" has been added to your repositories
```

```shell
helm show values prometheus-community/kube-prometheus-stack
```

```text
A lot of text; it is recommended to save the output to a file and go through it (at least use Ctrl+F or another search option to find the settings you might be interested in replacing/changing).
```

My stack_values.yaml file is:
```yaml
prometheus:
  prometheusSpec:
    retention: "30d"
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: slow-nfs-01
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 50Gi
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: slow-nfs-01
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 10Gi
```
Besides the volumes mentioned [here](#persistentvolume), it increases the retention from 10 days to 30.

If you haven't configured PersistentVolume storage, just skip the `--values` line referencing it. Note that once the pod is restarted, all data will be lost.

```shell
helm install prometheus-stack-01 prometheus-community/kube-prometheus-stack \
  -n observability \
  --values ./src/stack_values.yaml
```

```text
NAME: prometheus-stack-01
LAST DEPLOYED: Sun Jan 14 22:34:11 2024
NAMESPACE: observability
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace observability get pods -l "release=prometheus-stack-01"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
```
### Check running pods in namespace

Everything seems to be deployed and working correctly.

```shell
kubectl get pods -n observability
```

```text
NAME                                                      READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-stack-01-kube-p-alertmanager-0    2/2     Running   0          73s
prometheus-prometheus-stack-01-kube-p-prometheus-0        2/2     Running   0          73s
prometheus-stack-01-grafana-69bd95649b-w67xg              3/3     Running   0          76s
prometheus-stack-01-kube-p-operator-b97d5f9cc-cm2pl       1/1     Running   0          76s
prometheus-stack-01-kube-state-metrics-554fd7bf8b-z62gv   1/1     Running   0          76s
prometheus-stack-01-prometheus-node-exporter-7bwbd        1/1     Running   0          76s
prometheus-stack-01-prometheus-node-exporter-dvqc6        1/1     Running   0          76s
prometheus-stack-01-prometheus-node-exporter-nfm5g        1/1     Running   0          76s
prometheus-stack-01-prometheus-node-exporter-ssfkb        1/1     Running   0          76s
```
### Enable Istio Injection
|
||||
|
||||
Let's enable back istio-injection on the namespace.
|
||||
|
||||
```shell
|
||||
kubectl label namespaces observability istio-injection=enabled --overwrite=true
|
||||
```
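A quick way to confirm the label was applied (plain `kubectl`; nothing here is specific to this setup beyond the namespace name):

```shell
kubectl get namespace observability --show-labels
```

The `LABELS` column should now include `istio-injection=enabled`.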
### Delete all pods so they are recreated with the Istio sidecar

For the sidecar to be injected into the already-running workloads, we need to delete the pods so they are recreated.

```shell
kubectl delete pods -n observability --all
```

```text
pod "alertmanager-prometheus-stack-01-kube-p-alertmanager-0" deleted
pod "prometheus-prometheus-stack-01-kube-p-prometheus-0" deleted
pod "prometheus-stack-01-grafana-69bd95649b-w67xg" deleted
pod "prometheus-stack-01-kube-p-operator-b97d5f9cc-cm2pl" deleted
pod "prometheus-stack-01-kube-state-metrics-554fd7bf8b-z62gv" deleted
pod "prometheus-stack-01-prometheus-node-exporter-7bwbd" deleted
pod "prometheus-stack-01-prometheus-node-exporter-dvqc6" deleted
pod "prometheus-stack-01-prometheus-node-exporter-nfm5g" deleted
pod "prometheus-stack-01-prometheus-node-exporter-ssfkb" deleted
```
### Check pods status (again)

Everything seems to be deployed and working correctly.

```shell
kubectl get pods -n observability
```

```text
NAME                                                      READY   STATUS    RESTARTS      AGE
alertmanager-prometheus-stack-01-kube-p-alertmanager-0    3/3     Running   0             44s
prometheus-prometheus-stack-01-kube-p-prometheus-0        3/3     Running   0             43s
prometheus-stack-01-grafana-69bd95649b-24v58              4/4     Running   0             46s
prometheus-stack-01-kube-p-operator-b97d5f9cc-5bdwh       2/2     Running   1 (43s ago)   46s
prometheus-stack-01-kube-state-metrics-554fd7bf8b-wjw4d   2/2     Running   2 (41s ago)   46s
prometheus-stack-01-prometheus-node-exporter-4266g        1/1     Running   0             46s
prometheus-stack-01-prometheus-node-exporter-lmxdj        1/1     Running   0             45s
prometheus-stack-01-prometheus-node-exporter-shd72        1/1     Running   0             45s
prometheus-stack-01-prometheus-node-exporter-wjhdr        1/1     Running   0             45s
```

Note how the `READY` count of most pods has increased by one container (e.g. from 2/2 to 3/3): that extra container is the injected Istio sidecar.
### Gateway

I have my gateways already created (in this scenario I will be using the local gateway).

### VirtualService

I will create 2 VirtualService entries, one for the Grafana dashboard and another for the Prometheus dashboard:

- Prometheus dashboard URL: "prometheus.llb.filter.home"
- Grafana dashboard URL: "grafana.llb.filter.home"

```shell
kubectl apply -f ./src/VirtualService.yaml
```

```text
virtualservice.networking.istio.io/grafana-vs created
virtualservice.networking.istio.io/prometheus-vs created
```
## Prometheus

As a simple example of accessing Kubernetes metrics, you can run the following PromQL queries:

### Running pods per node

We can see the label "node=XXXX", which matches each of the Kubernetes nodes available within the cluster.

```promql
kubelet_running_pods
```
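To get a single cluster-wide total instead of one series per node, the same metric can be wrapped in an aggregation (plain PromQL, no assumptions beyond the metric shown above):

```promql
sum(kubelet_running_pods)
```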
### Running pods per namespace

Right now, in the namespace "observability" I have a total of 9 pods running.

```promql
sum(kube_pod_status_ready) by (namespace)
```
You can verify this by running:

```shell
kubectl get pods -n observability --no-headers=true | nl
```

```text
     1	alertmanager-prometheus-stack-01-kube-p-alertmanager-0    3/3   Running   0             40m
     2	prometheus-prometheus-stack-01-kube-p-prometheus-0        3/3   Running   0             40m
     3	prometheus-stack-01-grafana-69bd95649b-24v58              4/4   Running   0             40m
     4	prometheus-stack-01-kube-p-operator-b97d5f9cc-5bdwh       2/2   Running   1 (40m ago)   40m
     5	prometheus-stack-01-kube-state-metrics-554fd7bf8b-wjw4d   2/2   Running   2 (40m ago)   40m
     6	prometheus-stack-01-prometheus-node-exporter-4266g        1/1   Running   0             40m
     7	prometheus-stack-01-prometheus-node-exporter-lmxdj        1/1   Running   0             40m
     8	prometheus-stack-01-prometheus-node-exporter-shd72        1/1   Running   0             40m
     9	prometheus-stack-01-prometheus-node-exporter-wjhdr        1/1   Running   0             40m
```

Which returns a total of 9 pods with status "Running".
### Running containers per namespace

Currently, this returns 18 containers running in the namespace **observability**.

```promql
sum(kube_pod_container_status_running) by (namespace)
```
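If you only care about a single namespace, the same metric can be filtered directly with a label matcher instead of aggregating over all namespaces (standard PromQL syntax; the namespace name is the one used throughout this example):

```promql
sum(kube_pod_container_status_running{namespace="observability"})
```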
Listing the pods in the namespace once more and adding up the `READY` columns, I can confirm the total of containers running within the namespace adds up to 18, matching the Prometheus data.

```shell
kubectl get pods -n observability
```

```text
NAME                                                      READY   STATUS    RESTARTS      AGE
alertmanager-prometheus-stack-01-kube-p-alertmanager-0    3/3     Running   0             45m
prometheus-prometheus-stack-01-kube-p-prometheus-0        3/3     Running   0             45m
prometheus-stack-01-grafana-69bd95649b-24v58              4/4     Running   0             45m
prometheus-stack-01-kube-p-operator-b97d5f9cc-5bdwh       2/2     Running   1 (45m ago)   45m
prometheus-stack-01-kube-state-metrics-554fd7bf8b-wjw4d   2/2     Running   2 (45m ago)   45m
prometheus-stack-01-prometheus-node-exporter-4266g        1/1     Running   0             45m
prometheus-stack-01-prometheus-node-exporter-lmxdj        1/1     Running   0             45m
prometheus-stack-01-prometheus-node-exporter-shd72        1/1     Running   0             45m
prometheus-stack-01-prometheus-node-exporter-wjhdr        1/1     Running   0             45m
```
**13-Monitoring/01-Create_Prometheus_Stack/src/Gateway.yaml**

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: local-gateway
  namespace: default
spec:
  selector:
    istio: local-ingress
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "my.home"
        - "*.filter.home"
```
**13-Monitoring/01-Create_Prometheus_Stack/src/VirtualService.yaml**

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: default
  labels:
    app: grafana
spec:
  hosts:
    - "grafana.my.home"
  gateways:
    - default/local-gateway
  http:
    - route:
        - destination:
            host: prometheus-stack-01-grafana.observability.svc.cluster.local
            port:
              number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: prometheus-vs
  namespace: observability
  labels:
    app: prometheus
spec:
  hosts:
    - "prometheus.my.home"
  gateways:
    - default/local-gateway
  http:
    - route:
        - destination:
            host: prometheus-stack-01-kube-p-prometheus.observability.svc.cluster.local
            port:
              number: 9090
```
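Without DNS entries for these hosts, the routing can still be smoke-tested by pinning the `Host` header on a request to the local ingress gateway. The address below is a placeholder for your local gateway's LoadBalancer IP, and the hostname assumes the VirtualService shown above:

```shell
curl -I -H "Host: grafana.my.home" http://<LOCAL_GATEWAY_IP>/
```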
**13-Monitoring/01-Create_Prometheus_Stack/src/stack_values.yaml**

```yaml
prometheus:
  prometheusSpec:
    retention: "30d"
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: slow-nfs-01
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 50Gi
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: slow-nfs-01
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 10Gi
```
**13-Monitoring/02-Add_Istio_Scrapping_Metrics/README.md**

## Description

Through the use of Prometheus CRDs, we deploy PodMonitor and ServiceMonitor objects, which will scrape metrics from the Envoy proxies attached to each pod and from the Istiod deployment.

## Requirements

- Complete step [01-Create_Prometheus_Stack](../01-Create_Prometheus_Stack)

## Istio Metrics

Now that we have a functional Prometheus-Grafana-Alertmanager stack set up, the next step is to deploy Prometheus scraping jobs/configs to gather:

- Envoy proxy metrics
- Istiod metrics

> **Note:** \
> The resources deployed are based on the [Istio Prometheus Operator Example](https://github.com/istio/istio/blob/1.20.2/samples/addons/extras/prometheus-operator.yaml)

```shell
kubectl create -f PrometheusIstioAgent.yaml
```

```text
servicemonitor.monitoring.coreos.com/istiod-metrics-monitor created
podmonitor.monitoring.coreos.com/envoy-stats-monitor created
```

To update the list of Prometheus targets, give it a minute or two until the new configuration is picked up automatically.
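If the Prometheus UI is not exposed through a VirtualService yet, a port-forward is enough to reach it; the service name below is the one used as destination in the earlier VirtualService, so adjust it if your release name differs:

```shell
kubectl port-forward -n observability svc/prometheus-stack-01-kube-p-prometheus 9090:9090
```

The UI then becomes reachable at http://localhost:9090.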
### Check Targets

Once the Prometheus pod is up and running again, open the Prometheus web UI and go to the section **Status > Targets**, which lists all the available targets.

Once there, I am able to see the following entries:

- **podMonitor/observability/envoy-stats-monitor/0 (15/15 up)**

- **serviceMonitor/observability/istiod-metrics-monitor/0 (2/2 up)**

### Check through Prometheus queries

Now, back in the **Graph** section, we can confirm whether we are receiving metrics from **Istiod** and **Envoy**.

#### Istiod

Very simple and straightforward: the uptime for each one of the **Istiod** pods.

```promql
istiod_uptime_seconds
```
#### Envoy

Requests grouped by `destination_service_name`.

```promql
sum(istio_requests_total) by (destination_service_name)
```
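Since `istio_requests_total` is a counter, it is usually more useful as a rate; a per-service request rate over the last 5 minutes can be sketched with the standard PromQL `rate` function:

```promql
sum(rate(istio_requests_total[5m])) by (destination_service_name)
```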
**13-Monitoring/02-Add_Istio_Scrapping_Metrics/PrometheusIstioAgent.yaml**

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: kube-prometheus-stack-prometheus
    release: prometheus-stack-01
  name: istiod-metrics-monitor
  namespace: observability
spec:
  jobLabel: istio
  targetLabels: [app]
  selector:
    matchExpressions:
      - {key: istio, operator: In, values: [pilot]}
  namespaceSelector:
    any: true
  endpoints:
    - port: http-monitoring
      interval: 15s
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: envoy-stats-monitor
  labels:
    app: kube-prometheus-stack-prometheus
    release: prometheus-stack-01
  namespace: observability
spec:
  selector:
    matchExpressions:
      - {key: istio-prometheus-ignore, operator: DoesNotExist}
  namespaceSelector:
    any: true
  jobLabel: envoy-stats
  podMetricsEndpoints:
    - path: /stats/prometheus
      interval: 15s
      relabelings:
        - action: keep
          sourceLabels: [__meta_kubernetes_pod_container_name]
          regex: "istio-proxy"
        - action: keep
          sourceLabels: [__meta_kubernetes_pod_annotationpresent_prometheus_io_scrape]
        - action: replace
          regex: (\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4})
          replacement: '[$2]:$1'
          sourceLabels:
            - __meta_kubernetes_pod_annotation_prometheus_io_port
            - __meta_kubernetes_pod_ip
          targetLabel: __address__
        - action: replace
          regex: (\d+);((([0-9]+?)(\.|$)){4})
          replacement: $2:$1
          sourceLabels:
            - __meta_kubernetes_pod_annotation_prometheus_io_port
            - __meta_kubernetes_pod_ip
          targetLabel: __address__
        - action: labeldrop
          regex: "__meta_kubernetes_pod_label_(.+)"
        - sourceLabels: [__meta_kubernetes_namespace]
          action: replace
          targetLabel: namespace
        - sourceLabels: [__meta_kubernetes_pod_name]
          action: replace
          targetLabel: pod_name
```
**13-Monitoring/03-Grafana_Istio_Dashboards/README.md**

## Description

Shares some ready-to-use dashboards once the Istio metrics are added to the Prometheus stack.

This is extremely simple, to be honest.

## Requirements

- Complete step [02-Add_Istio_Scrapping_Metrics](../02-Add_Istio_Scrapping_Metrics)

## Grafana

### Default credentials

> **Note:** \
> Since Grafana has no storage/volume, **all changes will be lost**

User: `admin` \
Password: `prom-operator`
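These defaults come from the chart; if they were overridden through values, the actual admin password can be read back from the secret the chart creates. The secret name below assumes the `<release>-grafana` naming convention used by the Grafana subchart:

```shell
kubectl get secret prometheus-stack-01-grafana -n observability \
  -o jsonpath='{.data.admin-password}' | base64 -d; echo
```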
Just check any dashboard to see if it's working correctly.

I personally recommend the dashboard:

- **Node Exporter / USE Method / Node**

It lists the resource utilization for each one of the nodes.

Check whatever you want, there are some good predefined graphs already.

### Want to change credentials?

Just log in with the admin user and change whatever you want: username, email, password, preferences...

### Want to manage/create Users/Teams?

Select `Administration` > `Users and Access`.

There you will be able to create/manage **Users**, **Teams** and **Service Accounts**.

### Istio related Dashboards

Here is a list of ready-to-use Istio related dashboards that you might want to set up on your Grafana deployment:

- https://grafana.com/grafana/dashboards/7630-istio-workload-dashboard/
- https://grafana.com/grafana/dashboards/7636-istio-service-dashboard/
- https://grafana.com/grafana/dashboards/7645-istio-control-plane-dashboard/
- https://grafana.com/grafana/dashboards/7639-istio-mesh-dashboard/

The dashboards were found here:

- https://grafana.com/orgs/istio/dashboards
**13-Monitoring/README.md**

## Examples

- 01-Create_Prometheus_Stack
- 02-Add_Istio_Scrapping_Metrics
- 03-Grafana_Istio_Dashboards

This set of examples (steps 1, 2 and 3) is progressive: each one uses resources set up in the previous examples.

## Quick summary

Example 1 deploys the Prometheus stack.

Example 2 adds jobs to Prometheus to scrape Istio related metrics.

Example 3 just shares a couple of ready-to-use Istio related dashboards.
```diff
@@ -108,7 +108,7 @@ spec:
 
 ## DestinationRule
 
-As seen in the example [02-Traffic_management/09-HTTPS-backend](../../04-Backends/02-HTTPS-backend), where we configure Istio to use an `HTTPS` backend, the same configuration is applied on this case (yes, I am aware that a `ServiceEntry` is also a backend).
+As seen in the example [04-Backends/03-HTTPS-backend](../../04-Backends/03-HTTPS-backend), where we configure Istio to use an `HTTPS` backend, the same configuration is applied on this case (yes, I am aware that a `ServiceEntry` is also a backend).
 
 For such, we deploy a `DestinationRule` setting to expect to terminate the TLS traffic, for the traffic with resource destination `github.com`, and port `8443`, which matches the settings set in our [ServiceEntry](#serviceentry) deployed.
```
**README.md**

````diff
@@ -4,12 +4,14 @@ I have absolutely used as a reference and/or template other parties configuratio
 
 I have tried to reference as much as possible as long it's relevant/useful for the reader.
 
-Refer to the specific `README.md` in each example for more information, some documentation is still in progress.
+Currently, the resources are under a relocation and the folders might contain things that don't _really match the topic_.
+Refer to the specific `README.md` in each example for more information.
 
 # Tree of folders
 
 ```shell
 tree -d | grep -v src$
 ```
 
 ```text
 ├── 00-Troubleshooting
 ├── 01-Getting_Started
@@ -33,7 +35,8 @@ Currently, the resources are under a relocation and the folders might contain th
 │   └── 07-HTTP-to-HTTPS-traffic-redirect
 ├── 04-Backends
 │   ├── 01-Service_Entry
-│   ├── 02-HTTPS-backend
+│   ├── 02-Outboud-Traffic-Policy
+│   ├── 03-HTTPS-backend
 ├── 05-Sidecar
 │   ├── 01-ingress-proxy-forwarding
 │   └── 02-egress-proxy
@@ -52,9 +55,13 @@ Currently, the resources are under a relocation and the folders might contain th
 │   ├── 02-disable-mTLS
 │   └── 03-mTLS-per-port-settings
 ├── 11-Fault_Injection
-│   ├── 05a-FaultInjection-delay
-│   └── 05b-FaultInjection-abort
+│   ├── 01-FaultInjection-delay
+│   └── 02-FaultInjection-abort
 ├── 12-CircuitBreaking
+├── 13-monitoring
+│   ├── 01-Create_Prometheus_Stack
+│   ├── 02-Add_Istio_Scrapping_Metrics
+│   └── 03-Grafana_Istio_Dashboards
 ├── 90-MixConfigs
 │   ├── 01-HTTPS-Gateway_Service_Entry
 │   └── Minecraft
````