Added a secondary NFS provisioner (slow/hdd)

This commit is contained in:
savagebidoof 2024-01-13 05:23:32 +01:00
parent e3acd566c8
commit e0d3141580
2 changed files with 136 additions and 0 deletions


@ -0,0 +1,12 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: prometheus-storage
namespace: observability
spec:
storageClassName: slow-nfs-01
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi


@ -550,4 +550,128 @@ I have decided to dump my old Jenkins architecture and rely on Skaffold, it's great.
I will work on integrating it with Jenkins.
# EXTRA EXTRA
## Secondary NFS provisioner
I will add a **secondary NFS Provisioner** as a new storage class.
This storage class will be targeting a **"slow"/HDD** directory/drive.
Mainly intended for storing a bunch of logs, files, videos, or whatever.
Looking at you Prometheus 👀👀.
NFS server: nfs.filter.home
Target directory: **/resources/slow_nfs_provisioner** (this is made up, I don't want to share it.)
## NFS Server
### Create the directory
- [x] Done
### Update the NFS server config to allow the directory to be exported
- [x] Done
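For reference, on a typical Linux NFS server this boils down to one more entry in `/etc/exports`; the subnet and options below are assumptions, not my actual config:

```text
/resources/slow_nfs_provisioner  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```

Then `exportfs -ra` re-exports everything without restarting the NFS service.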
## Deploy new NFS provisioner
```shell
NFS_SERVER=nfs.filter.home
NFS_EXPORT_PATH=/resources/slow_nfs_provisioner
```
```shell
helm -n nfs-provisioner install slow-nfs-01 nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=${NFS_SERVER} \
--set nfs.path=${NFS_EXPORT_PATH} \
--set storageClass.defaultClass=true \
--set replicaCount=2 \
--set storageClass.name=slow-nfs-01 \
--set storageClass.provisionerName=slow-nfs-01
```
```text
NAME: slow-nfs-provisioner-01
LAST DEPLOYED: Fri Jan 12 23:32:25 2024
NAMESPACE: nfs-provisioner
STATUS: deployed
REVISION: 1
TEST SUITE: None
```
## Migrate some volumes to new dir
### Prometheus
(because he's the one filling my SSD.)
Copy the files (maintaining permissions) from:
**/resources/slow_nfs_provisioner/prometheus_generated_vol** to **/resources/slow_nfs_provisioner/prometheus_tmp**
This is mainly to already "have them" on the destination drive; the folder name can be whatever.
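A sketch of that copy with plain `cp -a` (`rsync -a` would work just as well). It runs on scratch dirs here; in reality `SRC`/`DST` would be the NFS-server paths above:

```shell
# Demo on scratch dirs; in reality SRC/DST are the NFS-server paths above.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo 'sample' > "$SRC/chunk"

# -a = archive: recursive, preserves permissions, ownership and timestamps.
# The trailing '/.' copies the directory's contents, not the directory itself.
cp -a "$SRC/." "$DST/"
```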
### Create/Provision new PV
Since the `path` value is immutable after creation, the proper procedure would be: create a new volume, move the contents over, update the configs to point at the new volume, recreate the workloads, then delete the old volume.
Since this is my homelab, and I'm not bothered by a few minutes of lost logs, I will instead: delete the old volume, delete the deployment using it, create a new (empty) volume, then rename the `prometheus_tmp` folder from the previous step so it takes the place of the directory backing the new volume.
Then restart the Kubernetes deployment.
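The folder swap on the NFS server can be sketched like this. The provisioned directory name is hypothetical (the `nfs-subdir-external-provisioner` names it `<namespace>-<pvc-name>-<pv-name>`), and the demo runs on a scratch root rather than the real `/resources/slow_nfs_provisioner`:

```shell
# Scratch root standing in for /resources/slow_nfs_provisioner on the NFS server.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/prometheus_tmp"                          # data copied earlier
touch "$ROOT/prometheus_tmp/wal"
mkdir "$ROOT/observability-prometheus-storage-pvc-1234"  # empty dir from the new PVC

# Swap: drop the empty provisioned dir, rename the pre-copied data into its place.
rmdir "$ROOT/observability-prometheus-storage-pvc-1234"
mv "$ROOT/prometheus_tmp" "$ROOT/observability-prometheus-storage-pvc-1234"
```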
### Delete PVC
```shell
kubectl delete pvc -n observability prometheus-storage --force
```
This can take a while, since there are around 40GB of logs and the volume is still being used by the deployment.
```shell
kubectl get pvc -n observability prometheus-storage
```
```text
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
prometheus-storage Terminating pvc-698cf837-14a3-43ee-990a-5a34e1a396de 1Gi RWX nfs-01 94d
```
### Delete Deployment
```shell
kubectl delete deployment -n observability prometheus
```
```text
deployment.apps "prometheus" deleted
```
### Delete PV
```shell
kubectl delete pv pvc-698cf837-14a3-43ee-990a-5a34e1a396de
```
```text
persistentvolume "pvc-698cf837-14a3-43ee-990a-5a34e1a396de" deleted
```
### Create new volume
```shell
kubectl create -f PrometheusVolume.yaml
```
```text
persistentvolumeclaim/prometheus-storage created
```
I later did some cleanup of the existing data, because 41GB was kind of too much for my usage (aka I noticed the `prometheus-server` container was taking forever to parse all the data).
Later I will change the configuration to reduce the retention period and the amount of data stored.
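For the retention part, Prometheus can cap both the age and the total size of the TSDB via container args; a sketch with placeholder values I'd still have to tune (not what I currently run):

```yaml
# Container args in the Prometheus deployment (placeholder values):
args:
  - "--config.file=/etc/prometheus/prometheus.yml"
  - "--storage.tsdb.path=/prometheus"
  - "--storage.tsdb.retention.time=15d"   # drop data older than 15 days
  - "--storage.tsdb.retention.size=10GB"  # cap the total TSDB size
```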
# Interesting
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#cross-namespace-data-sources