lost and found ( for me ? )

Kubernetes: deploy a Ceph cluster for persistent volumes

Here are my logs from setting up a Ceph cluster to provide persistent volumes for Kubernetes.

This assumes you have already set up a Kubernetes cluster with Juju and MAAS:
http://lost-and-found-narihiro.blogspot.jp/2017/07/ubuntu-1604-deploy-kubernetes-cluster.html

MAAS : MAAS Version 1.9.5+bzr4599-0ubuntu1 (14.04.1)
Juju : 2.2.2-xenial-amd64
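If you want to confirm the existing Kubernetes deployment first, a quick status check like the one below should do. The application names kubernetes-master and kubernetes-worker are the ones used by the canonical bundle in the post above; adjust them if your deployment differs.

# juju status kubernetes-master kubernetes-worker --format short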

Before deploying Ceph.

Juju GUI

K8s dashboard
No persistent volumes

[ deploy a Ceph cluster with Juju ]

https://jujucharms.com/ceph-mon/
https://jujucharms.com/ceph-osd/

- Ceph mon

# juju deploy cs:ceph-mon -n 3

# juju status ceph-mon --format short

- ceph-mon/3: 192.168.40.42 (agent:allocating, workload:waiting)
- ceph-mon/4: 192.168.40.40 (agent:allocating, workload:waiting)
- ceph-mon/5: 192.168.40.41 (agent:allocating, workload:waiting)

After a few minutes, all ceph-mon units become active:

# juju status ceph-mon --format short

- ceph-mon/3: 192.168.40.42 (agent:idle, workload:active)
- ceph-mon/4: 192.168.40.40 (agent:idle, workload:active)
- ceph-mon/5: 192.168.40.41 (agent:idle, workload:active)

Juju GUI after deploying ceph-mon.

- Ceph osd

# cat ceph-osd-config.yaml
ceph-osd:
   osd-devices: /dev/vdb
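As a side note (not part of my setup), the ceph-osd charm's osd-devices option should accept a whitespace-separated list of devices, so a node with more than one data disk could be configured along these lines; /dev/vdb and /dev/vdc here are just example device names:

ceph-osd:
  osd-devices: /dev/vdb /dev/vdc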

# juju deploy cs:ceph-osd -n 3 --config ceph-osd-config.yaml

# juju status ceph-osd --format short

- ceph-osd/0: 192.168.40.45 (agent:allocating, workload:waiting)
- ceph-osd/1: 192.168.40.43 (agent:allocating, workload:waiting)
- ceph-osd/2: 192.168.40.44 (agent:allocating, workload:waiting)

Relate ceph-osd to ceph-mon so the OSDs join the monitor cluster:

# juju add-relation ceph-mon ceph-osd

# juju status ceph-mon ceph-osd --format short

- ceph-mon/3: 192.168.40.42 (agent:executing, workload:active)
- ceph-mon/4: 192.168.40.40 (agent:executing, workload:active)
- ceph-mon/5: 192.168.40.41 (agent:executing, workload:active)
- ceph-osd/0: 192.168.40.45 (agent:executing, workload:active)
- ceph-osd/1: 192.168.40.43 (agent:executing, workload:active)
- ceph-osd/2: 192.168.40.44 (agent:executing, workload:active)
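At this point you could already peek at the cluster from a monitor unit without logging in; this is just an optional check I did not record in the original log (the unit name matches the status output above, adjust to yours):

# juju run --unit ceph-mon/3 'ceph osd stat'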



Relate kubernetes-master to ceph-mon so Kubernetes can create and use RBD-backed persistent volumes:

# juju add-relation kubernetes-master ceph-mon


Create a 50 MB RBD-backed persistent volume with the charm's create-rbd-pv action:

# juju run-action kubernetes-master/0 create-rbd-pv name=test size=50
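The action runs asynchronously; if you want to check whether it succeeded, Juju can list queued actions and show their output. <action-id> below is a placeholder for the id printed by run-action or show-action-status:

# juju show-action-status
# juju show-action-output <action-id>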

# juju ssh kubernetes-master/0


$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
test      50M        RWO           Retain          Available             rbd                      17s

$ kubectl get pvc
No resources found.

On the K8s dashboard, the new PV now appears.

Reference
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

Create a persistent volume claim.
ubuntu@m-node05:~$ cat pv-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pv-claim
spec:
  storageClassName: rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3M


ubuntu@m-node05:~$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
test      50M        RWO           Retain          Available             rbd                      20m

ubuntu@m-node05:~$ kubectl create -f pv-claim.yaml
persistentvolumeclaim "test-pv-claim" created

The claim only requests 3M, but since there is no dynamic provisioner here it binds to the pre-created 50M PV, which is the only available PV that satisfies the request and the rbd storage class.

ubuntu@m-node05:~$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                   STORAGECLASS   REASON    AGE
test      50M        RWO           Retain          Bound     default/test-pv-claim   rbd                      20m

ubuntu@m-node05:~$ kubectl get pvc
NAME            STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
test-pv-claim   Bound     test      50M        RWO           rbd            7s
ubuntu@m-node05:~$
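For more detail on the binding (not captured in my log), kubectl describe shows the claim and volume attributes, including the RBD monitors and image behind the PV:

ubuntu@m-node05:~$ kubectl describe pv test
ubuntu@m-node05:~$ kubectl describe pvc test-pv-claim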


Create a pod that uses the PVC.
ubuntu@m-node05:~$ cat create-a-pod-with-pvc.yaml
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: test-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

ubuntu@m-node05:~$ kubectl create -f create-a-pod-with-pvc.yaml
pod "task-pv-pod" created

$ kubectl get pod task-pv-pod
NAME          READY     STATUS    RESTARTS   AGE
task-pv-pod   1/1       Running   0          48s

ubuntu@m-node05:~$ kubectl exec -it task-pv-pod -- /bin/bash

root@task-pv-pod:~# df -h | grep rbd
/dev/rbd0        46M  2.6M   44M   6% /usr/share/nginx/html

root@task-pv-pod:~# apt update; apt install -y curl

root@task-pv-pod:/# echo 'hello world' > /usr/share/nginx/html/index.html

root@task-pv-pod:/# curl http://127.0.0.1
hello world
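I did not do this in the original log, but as a sanity check that the file really lives on the RBD volume and not inside the container, you could delete the pod, recreate it from the same manifest, and read the file back once the pod is Running again; it should still print 'hello world':

root@task-pv-pod:/# exit
ubuntu@m-node05:~$ kubectl delete pod task-pv-pod
ubuntu@m-node05:~$ kubectl create -f create-a-pod-with-pvc.yaml
ubuntu@m-node05:~$ kubectl exec task-pv-pod -- cat /usr/share/nginx/html/index.html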

Access a ceph-mon node and check the cluster health.
$ juju ssh ceph-mon/3

ubuntu@m-node10:~$ sudo ceph health
HEALTH_OK

ubuntu@m-node10:~$ sudo ceph osd stat
    osdmap e15: 3 osds: 3 up, 3 in
           flags sortbitwise,require_jewel_osds

ubuntu@m-node10:~$ sudo ceph -s
   cluster 80009c18-729c-11e7-9d93-5254009250af
    health HEALTH_OK
    monmap e1: 3 mons at {m-node08=192.168.40.40:6789/0,m-node09=192.168.40.41:6789/0,m-node10=192.168.40.42:6789/0}
           election epoch 4, quorum 0,1,2 m-node08,m-node09,m-node10
    osdmap e14: 3 osds: 3 up, 3 in
           flags sortbitwise,require_jewel_osds
     pgmap v145: 64 pgs, 1 pools, 8836 kB data, 8 objects
           109 MB used, 58225 MB / 58334 MB avail
                 64 active+clean

ubuntu@m-node10:~$ sudo ceph
ceph> health
HEALTH_OK

ceph> status
   cluster 80009c18-729c-11e7-9d93-5254009250af
    health HEALTH_OK
    monmap e1: 3 mons at {m-node08=192.168.40.40:6789/0,m-node09=192.168.40.41:6789/0,m-node10=192.168.40.42:6789/0}
           election epoch 4, quorum 0,1,2 m-node08,m-node09,m-node10
    osdmap e14: 3 osds: 3 up, 3 in
           flags sortbitwise,require_jewel_osds
     pgmap v145: 64 pgs, 1 pools, 8836 kB data, 8 objects
           109 MB used, 58225 MB / 58334 MB avail
                 64 active+clean

ceph> exit
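Not part of the original log, but the RBD image backing the PV can also be inspected from a monitor node. I assume here that the kubernetes-master charm creates the image in the default rbd pool (the storage class name suggests so), and <image-name> is a placeholder for whatever rbd ls prints:

ubuntu@m-node10:~$ sudo ceph osd lspools
ubuntu@m-node10:~$ sudo rbd ls rbd
ubuntu@m-node10:~$ sudo rbd info rbd/<image-name>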
