The Storage Gateway I set up a while back needs to serve the on-premises side, and since the on-prem environment is Kubernetes, I decided to mount it over NFS as a PV.
In the diagram, the application servers are the Kubernetes nodes,
and I'll write and apply PV/PVC YAML that NFS-mounts directly to the File Gateway.
Previous Storage Gateway setup post:
https://raid-1.tistory.com/193
Current on-prem layout:
1. First, nfs-utils has to be installed on every node.
Pushing it out with Ansible in one go would have been nicer, but
since there are only three nodes, I just used MobaXterm's multi-execution to run it on all of them at once; the commands look roughly like the sketch below.
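This is a sketch assuming RHEL/CentOS-style nodes (where the package is nfs-utils); on Debian/Ubuntu it would be nfs-common instead. 192.168.0.110 is the File Gateway's NFS endpoint used in the PVs below.

# run on every node at once (MobaXterm multi-exec)
sudo yum install -y nfs-utils

# sanity check from any node: list the shares exported by the File Gateway
showmount -e 192.168.0.110
# the /web and /was shares referenced by the PVs should appear here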
pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-web
  labels:
    data: web
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: manual
  persistentVolumeReclaimPolicy: Delete
  nfs:
    server: 192.168.0.110
    path: /web
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-was
  labels:
    data: was
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: manual
  persistentVolumeReclaimPolicy: Delete
  nfs:
    server: 192.168.0.110
    path: /was
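To apply and check the PVs (a quick sketch; the file name matches the one above):

kubectl apply -f pv.yml
kubectl get pv
# nfs-pv-web and nfs-pv-was should both show STATUS=Available until a claim binds them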
pvc.yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-web
  labels:
    app: web
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: manual
  selector:
    matchLabels:
      data: web
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-was
  labels:
    app: was
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: manual
  selector:
    matchLabels:
      data: was
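After applying pvc.yml, each claim should bind to its PV through the data: web / data: was label selector. A quick check:

kubectl apply -f pvc.yml
kubectl get pvc
# pv-claim-web and pv-claim-was should show STATUS=Bound (to nfs-pv-web and nfs-pv-was)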
nodeport-service.yml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
spec:
  type: NodePort
  ports:
    - name: "http-port"
      protocol: "TCP"
      port: 80
      nodePort: 30080
    - name: "https-port"
      protocol: "TCP"
      port: 443
      nodePort: 30443
  selector:
    app: web
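The NodePort service exposes 30080/30443 on every node, so it can be checked from outside the cluster. A sketch, where <node-ip> is a placeholder for any node's address:

kubectl apply -f nodeport-service.yml
curl -I http://<node-ip>:30080/
# returns the web pods' response once they pass their readiness probe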
was-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: was
spec:
  replicas: 3
  selector:
    matchLabels:
      app: was
  template:
    metadata:
      labels:
        app: was
    spec:
      containers:
        - name: was
          image: hariniok/was:1
          ports:
            - containerPort: 8000
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /proxy/
              port: 8000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 1
            successThreshold: 2
            failureThreshold: 2
          volumeMounts:
            - mountPath: /was
              name: pvc-volume-was
      volumes:
        - name: pvc-volume-was
          persistentVolumeClaim:
            claimName: pv-claim-was
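After applying the WAS deployment, the readiness probe (GET /proxy/ on port 8000) decides when pods receive traffic. A rollout check, as a sketch:

kubectl apply -f was-deployment.yml
kubectl rollout status deployment/was
kubectl get pods -l app=was
# pods stay NotReady until /proxy/ answers successfully twice in a row (successThreshold: 2)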
was-service.yml
apiVersion: v1
kind: Service
metadata:
  name: was-service
  labels:
    app: was
spec:
  type: ClusterIP
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
  selector:
    app: was
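With the ClusterIP service in place, the web tier reaches the WAS pods at was-service:8000 inside the cluster. A quick in-cluster test, assuming a throwaway curl pod (curlimages/curl):

kubectl apply -f was-service.yml
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://was-service:8000/proxy/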
web-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: hariniok/web:1
          ports:
            - containerPort: 80
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /
              port: 80
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 1
            successThreshold: 2
            failureThreshold: 2
          volumeMounts:
            - mountPath: /web
              name: pvc-volume-web
      volumes:
        - name: pvc-volume-web
          persistentVolumeClaim:
            claimName: pv-claim-web
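Finally, after applying the web deployment, it's worth confirming the File Gateway share is really mounted inside the pods. A sketch, exec-ing into one of the web pods:

kubectl apply -f web-deployment.yml
kubectl get pods -l app=web
kubectl exec -it deploy/web -- df -h /web
# should show 192.168.0.110:/web mounted at /web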