
Storage Volumes and Data Persistence

Applications can be divided into stateful and stateless depending on whether handling the current request is affected by earlier requests. In a microservice architecture, applications are split into many microservices or smaller modules, so there are usually quite a few stateful applications alongside a sizable number of stateless ones, and for stateful applications data persistence is almost always a requirement. The storage volumes Kubernetes provides are defined at the pod level and shared by all containers in the pod; they can store application data outside the container's filesystem, and can even persist data independently of the pod's lifecycle.

How pods mount volumes

Containers in a pod share namespaces through the pause infrastructure container; both volume mounting and namespace sharing are built on top of this infrastructure container. So mounting a volume really means mounting it into the pause container, which the other containers in the pod then share.

Storage type classification

  1. Local node storage
    emptyDir: a temporary directory; its data is removed when the pod is deleted, with no persistence at all, and it can even be backed by the node's memory
    hostPath: a directory on the host
  2. Network-attached storage
    SAN: iSCSI / FC …
    NAS: nfs / cifs / http …
  3. Distributed storage
    glusterfs / rbd (ceph) / cdf / cephfs …
  4. Cloud storage (hosted in the cloud)
    EBS / Azure Disk …
  5. CSI interface
    flexVolume / flocker: management support for self-developed storage plugins
  6. StorageOS
    a middle layer over the various underlying storage backends that presents a unified interface
[root@master-0 ~]# kubectl explain pod.spec.volumes
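
All of these categories are consumed through the same `pod.spec.volumes` list. A minimal sketch mixing three of the types above (the pod name, mount paths and NFS server are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-types-demo           # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cache
      mountPath: /cache
    - name: hostlogs
      mountPath: /var/log/app
    - name: shared
      mountPath: /data
  volumes:
  - name: cache                     # local node storage: temporary directory
    emptyDir: {}
  - name: hostlogs                  # local node storage: host directory
    hostPath:
      path: /var/log/app
      type: DirectoryOrCreate
  - name: shared                    # network storage: NFS (placeholder server)
    nfs:
      server: nfs.example.com
      path: /exports/data
```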

persistentVolumeClaim

First of all, a persistent volume claim (PVC) is not itself a storage volume. Kubernetes supports many storage backends, each used differently; the PVC concept was introduced to make them easier to consume. With a PVC the user only needs to state how much storage is required, decoupling consumers from storage and turning storage into a service (VaaS). A PVC must then be bound to a PV, and the PV is the actual storage.

Dynamic provisioning of PVs

Normally PVs must be created manually in advance so they are available when a PVC requests storage. To achieve dynamic provisioning instead, StorageClasses must be defined, and storage classes are usually categorized by performance.
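
A minimal StorageClass sketch, assuming an AWS EBS backend; the class name and parameters are illustrative and must match a provisioner actually available in the cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                         # hypothetical class, e.g. named by performance tier
provisioner: kubernetes.io/aws-ebs   # the component that creates PVs on demand
parameters:
  type: gp2                          # backend-specific settings
reclaimPolicy: Delete                # dynamically provisioned PVs default to Delete
```

A PVC that sets `storageClassName: fast` would then have a matching PV provisioned for it automatically.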

![](https://raw.githubusercontent.com/ma823956028/image/master/picgo/20200906202732.png)

EmptyDir

A temporary volume; when the pod is deleted, a volume of this type is deleted along with it.

[root@master-0 ~]# kubectl explain pod.spec.volumes.emptyDir
KIND:     Pod
VERSION:  v1

RESOURCE: emptyDir <Object>

DESCRIPTION:
    EmptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
    Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling.

FIELDS:
medium <string>      # storage medium; "" (the default) uses the node's default medium (disk), Memory uses a RAM-backed tmpfs
    What type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string
    (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

sizeLimit <string>   # upper bound on the volume's size
    Total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir
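
Putting both fields together, a sketch of a memory-backed emptyDir with a size cap (the pod and volume names are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memdir-demo            # hypothetical
spec:
  containers:
  - name: app
    image: busybox:latest
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs in RAM instead of node disk
      sizeLimit: 64Mi          # counts against the pod's memory limits
```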

  1. Volume mount field reference

    [root@master-0 ~]# kubectl explain pod.spec.containers
    ... ...
    volumeMounts <[]Object>     # which volume or volumes to mount
        Pod volumes to mount into the container's filesystem. Cannot be updated.
    
    [root@master-0 ~]# kubectl explain pod.spec.containers.volumeMounts
    KIND:     Pod
    VERSION:  v1
    
    RESOURCE: volumeMounts <[]Object>
    
    DESCRIPTION:
        Pod volumes to mount into the container's filesystem. Cannot be updated.
        VolumeMount describes a mounting of a Volume within a container.
    
    FIELDS:
    mountPath <string> -required-    # mount path inside the container
        Path within the container at which the volume should be mounted. Must not contain ':'.
    
    mountPropagation <string>
        mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.
    
    name <string> -required-     # name of the volume to mount
        This must match the Name of a Volume.
    
    readOnly <boolean>           # whether to mount read-only
        Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.
    
    subPath <string>             # mount a subpath of the volume
        Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root).
    
    subPathExpr <string>
        Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references
        $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.
    
    
  2. Create a standalone (unmanaged) pod and mount an emptyDir volume

    [root@master-0 volume]# cat pod-vol-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-demo
      namespace: default
      labels:
        app: myapp
        tier: frontend
    spec:
      containers:
        - name: myapp
          image: nginx
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          imagePullPolicy: IfNotPresent
          volumeMounts:                       # mount inside this container
          - name: html
            mountPath: /data/web/html
        - name: busbox
          image: busybox:latest
          command:
          - "/bin/sh"
          - "-c"
          - "sleep 36000"
          volumeMounts:                       # any container in the pod may mount this volume
          - name: html
            mountPath: /data
      volumes:                                # defined at the pod level
      - name: html
        emptyDir: {}                          # empty value, i.e. all defaults
    [root@master-0 volume]# kubectl apply -f pod-vol-demo.yaml
    pod/pod-demo created
    
  3. Verify that the two containers share the volume

    [root@master-0 volume]# kubectl exec -it pod-demo -c busbox -- /bin/sh
    / # echo $(date) >> /data/index.html
    / # cat /data/index.html
    
    Sun Sep 6 13:20:41 UTC 2020
    / # exit
    [root@master-0 volume]# kubectl exec -it pod-demo -c myapp -- /bin/bash
    root@pod-demo:/# cat /data/web/html/index.html
    
    Sun Sep 6 13:20:41 UTC 2020
    

gitRepo

This volume type relies on the git command on the host to clone a remote git repository. From the pod's perspective it is just an emptyDir that happens to be pre-populated with data. Note that later changes in the remote repository are not synced into the volume, since only a point-in-time clone is made; likewise, changes inside the container are not pushed back to the repository. A sidecar (second) container can be used to keep syncing with the remote if needed. gitRepo volumes have been deprecated since Kubernetes 1.12, so on later versions the recommended pattern is to have an init container copy the repository contents into an emptyDir volume and mount that volume in the main container.
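
A sketch of that init-container replacement for gitRepo; the repository URL and names are placeholders, and it assumes an image that ships the git binary (such as alpine/git):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: git-clone-demo                  # hypothetical
spec:
  initContainers:
  - name: clone
    image: alpine/git                   # any image containing git works
    args: ["clone", "--depth=1", "https://example.com/repo.git", "/work"]
    volumeMounts:
    - name: repo
      mountPath: /work
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: repo
      mountPath: /usr/share/nginx/html  # main container sees the cloned files
  volumes:
  - name: repo
    emptyDir: {}
```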

HostPath

Associates a path on the pod's host with the pod. The volume lives outside the pod's lifecycle: deleting the pod does not affect it, so if a replacement pod is scheduled onto the same node and mounts the same path, the data is still there.

[root@master-0 ~]# kubectl explain pod.spec.volumes.hostPath
KIND:     Pod
VERSION:  v1

RESOURCE: hostPath <Object>

DESCRIPTION:
     HostPath represents a pre-existing file or directory on the host machine
     that is directly exposed to the container. This is generally used for
     system agents or other privileged things that are allowed to see the host
     machine. Most containers will NOT need this. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

     Represents a host path mapped into a pod. Host path volumes do not support
     ownership management or SELinux relabeling.

FIELDS:
   path <string> -required-     # path on the host
     Path of the directory on the host. If the path is a symlink, it will follow
     the link to the real path. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

   type <string>                # type of the host path being mounted
     Type for HostPath Volume Defaults to "" More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

hostPath.type can take any of the following values

![](https://raw.githubusercontent.com/ma823956028/image/master/picgo/20200911160614.png)

An empty type ("", the default) performs no checks and exists for compatibility with older versions of Kubernetes
CharDevice means a character device; the other values are DirectoryOrCreate, Directory, FileOrCreate, File, Socket and BlockDevice

  1. Create a standalone pod and mount a hostPath volume

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-vol-hostpath
      namespace: default
    spec:
      containers:
      - name: myapp
        image: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        hostPath:
          path: /data/pod/volume1
          type: DirectoryOrCreate
    [root@master-0 ~]# mkdir /data/pod/volume1 -p
    pod/pod-vol-hostpath created
    
  2. Create an index.html under /data/pod/volume1 on slave-0 and slave-1 respectively, then view the file from inside the pod

[root@master-0 ~]# kubectl get node
NAME              STATUS   ROLES    AGE    VERSION
master-0.shared   Ready    master   165d   v1.18.0
slave-0.shared    Ready    <none>   165d   v1.18.0
slave-1.shared    Ready    <none>   165d   v1.18.0
[root@master-0 ~]# ssh slave-0.shared
root@slave-0.shared's password:
Last login: Fri Sep 11 05:26:47 2020 from master-0.shared
[root@slave-0 ~]# mkdir -p /data/pod/volume1
[root@slave-0 volume1]# cat index.html
node0
[root@slave-0 volume1]# exit
logout
Connection to slave-0.shared closed.
[root@master-0 ~]# ssh slave-1.shared
root@slave-1.shared's password:
Last login: Fri Sep 11 05:26:08 2020 from master-0.shared
[root@slave-1 ~]#  mkdir -p /data/pod/volume1
[root@slave-1 ~]# cat index.html
node1
[root@slave-1 ~]# exit
logout
Connection to slave-1.shared closed.
[root@master-0 ~]# kubectl exec -it pod-vol-hostpath -- /bin/bash
root@pod-vol-hostpath:/# cat /usr/share/nginx/html/index.html
node1

PV and PVC

PV and PVC simplify storage consumption into a standard producer/consumer model: the consumer no longer cares which backend actually provides the storage. Note that if a PVC is created with no matching PV, the PVC stays Pending. A PV and a PVC bind one-to-one, but the bound volume can still be mounted by multiple pods, depending on the PVC's access modes.

![](https://raw.githubusercontent.com/ma823956028/image/master/picgo/20200912095650.png)

PV template

[root@master-0 ~]# kubectl explain pv.spec
KIND:     PersistentVolume
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec defines a specification of a persistent volume owned by the cluster.
     Provisioned by an administrator. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistent-volumes

     PersistentVolumeSpec is the specification of a persistent volume.

FIELDS:
   accessModes <[]string>           # access modes
     AccessModes contains all ways the volume can be mounted. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes

    ... ...

   capacity <map[string]string>        # capacity of this PV; binary suffixes (Ei|Pi|Gi, powers of 1024) differ from decimal ones (E|P|G, powers of 1000)
     A description of the persistent volume's resources and capacity. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity

   ... ...
   persistentVolumeReclaimPolicy <string>      # what happens to the PV's space when it is released:
                                               # Retain (default): keep the data for manual reclamation by an administrator
                                               # Recycle: scrub the volume by deleting all files on it (currently only nfs and hostPath)
                                               # Delete: delete the volume itself (only some cloud storage, e.g. AWS)
                                               # related spec fields: volumeMode selects filesystem vs raw block (default filesystem);
                                               # storageClassName is the StorageClass this PV belongs to (default empty);
                                               # mountOptions is a list of mount options such as ro, soft and hard
     What happens to a persistent volume when released from its claim. Valid
     options are Retain (default for manually created PersistentVolumes), Delete
     (default for dynamically provisioned PersistentVolumes), and Recycle
     (deprecated). Recycle must be supported by the volume plugin underlying
     this PersistentVolume. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming

![](https://raw.githubusercontent.com/ma823956028/image/master/picgo/20200912140245.png)

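
The reclaim policy and the related fields described above can be combined in a single PV; a hedged sketch (the NFS server, class name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo                    # hypothetical
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain   # keep the data after the claim is released
  volumeMode: Filesystem                  # the default; Block would expose a raw device
  storageClassName: slow                  # hypothetical class name
  mountOptions:
  - soft
  nfs:
    server: nfs.example.com
    path: /exports/v1
```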

PVC template

[root@master-0 ~]# kubectl explain pvc
KIND:     PersistentVolumeClaim
VERSION:  v1

DESCRIPTION:
    PersistentVolumeClaim is a user's request for and claim to a persistent
    volume

FIELDS:
apiVersion <string>
    APIVersion defines the versioned schema of this representation of an
    object. Servers should convert recognized schemas to the latest internal
    value, and may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

kind <string>
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

metadata <Object>
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

spec <Object>
    Spec defines the desired characteristics of a volume requested by a pod
    author. More info:
    https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

status <Object>
    Status represents the current information/status of a persistent volume
    claim. Read-only. More info:
    https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

[root@master-0 ~]# kubectl explain pvc.spec
KIND:     PersistentVolumeClaim
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
    Spec defines the desired characteristics of a volume requested by a pod
    author. More info:
    https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

    PersistentVolumeClaimSpec describes the common attributes of storage
    devices and allows a Source for provider-specific attributes

FIELDS:
accessModes <[]string>       # access modes; must be a subset of the PV's
    AccessModes contains the desired access modes the volume should have. More
    info:
    https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1

dataSource <Object>          # if the provisioner supports volume snapshots, the volume is created and data restored into it from the given source; otherwise the volume is not created
    This field can be used to specify either: * An existing VolumeSnapshot
    object (snapshot.storage.k8s.io/VolumeSnapshot - Beta) * An existing PVC
    (PersistentVolumeClaim) * An existing custom resource/object that
    implements data population (Alpha) In order to use VolumeSnapshot object
    types, the appropriate feature gate must be enabled
    (VolumeSnapshotDataSource or AnyVolumeDataSource) If the provisioner or an
    external controller can support the specified data source, it will create a
    new volume based on the contents of the specified data source. If the
    specified data source is not supported, the volume will not be created and
    the failure will be reported as an event. In the future, we plan to support
    more data source types and the behavior of the provisioner may change.

resources <Object>           # resource requests; the minimum amount of storage required
    Resources represents the minimum resources the volume should have. More
    info:
    https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources

selector <Object>            # a label selector can be used to pick the PV
    A label query over volumes to consider for binding.

storageClassName <string>    # name of the StorageClass
    Name of the StorageClass required by the claim. More info:
    https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1

volumeMode <string>          # mode of the backing volume; selects PVs by type
    volumeMode defines what type of volume is required by the claim. Value of
    Filesystem is implied when not included in claim spec.

volumeName <string>          # name of a specific backing PV, for exact selection; a selector can be used instead
    VolumeName is the binding reference to the PersistentVolume backing this
    claim.
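
For instance, a PVC sketch combining several of these fields (the label key/value are invented; a PV would need a matching label to bind):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: selective-pvc            # hypothetical
spec:
  accessModes: ["ReadWriteOnce"] # must be a subset of the PV's modes
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi               # bind only PVs offering at least this much
  selector:
    matchLabels:
      tier: fast                 # bind only PVs carrying this label
```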

Creating a PV and a PVC

  1. Prepare the NFS environment and the export directories

    [root@slave-0 vol]# mkdir {v1,v2,v3,v4,v5}
    [root@slave-0 vol]# ls
    v1  v2  v3  v4  v5
    [root@slave-0 vol]# vi /etc/exports
    [root@slave-0 vol]# exportfs -arv
    exporting 10.211.55.0/16:/data/vol/v4
    exporting 10.211.55.0/16:/data/vol/v3
    exporting 10.211.55.0/16:/data/vol/v2
    exporting 10.211.55.0/16:/data/vol/v1
    [root@slave-0 vol]# showmount -e
    Export list for slave-0.shared:
    /data/vol/v4 10.211.55.0/16
    /data/vol/v3 10.211.55.0/16
    /data/vol/v2 10.211.55.0/16
    /data/vol/v1 10.211.55.0/16
    
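The /etc/exports content edited in step 1 is not shown; judging from the exportfs output it would look roughly like this (the rw,no_root_squash options are an assumption):

```
/data/vol/v1 10.211.55.0/16(rw,no_root_squash)
/data/vol/v2 10.211.55.0/16(rw,no_root_squash)
/data/vol/v3 10.211.55.0/16(rw,no_root_squash)
/data/vol/v4 10.211.55.0/16(rw,no_root_squash)
```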
  2. Define the NFS directories as PVs

    [root@master-0 ~]# kubectl explain pv.spec.nfs
    KIND:     PersistentVolume
    VERSION:  v1
    
    RESOURCE: nfs <Object>
    
    DESCRIPTION:
        NFS represents an NFS mount on the host. Provisioned by an admin. More
        info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
    
        Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do
        not support ownership management or SELinux relabeling.
    
    FIELDS:
    path <string> -required-
        Path that is exported by the NFS server. More info:
        https://kubernetes.io/docs/concepts/storage/volumes#nfs
    
    readOnly <boolean>
        ReadOnly here will force the NFS export to be mounted with read-only
        permissions. Defaults to false. More info:
        https://kubernetes.io/docs/concepts/storage/volumes#nfs
    
    server <string> -required-
        Server is the hostname or IP address of the NFS server. More info:
        https://kubernetes.io/docs/concepts/storage/volumes#nfs
    
    [root@master-0 ~]# cat pv-nfs.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv001
    spec:
      nfs:
        path: /data/vol/v1
        server: slave-0.shared
      accessModes: ["ReadWriteMany","ReadWriteOnce"]
      capacity:
        storage: 2Gi
    [root@master-0 ~]# kubectl apply -f pv-nfs.yaml
    persistentvolume/pv001 created
    [root@master-0 ~]# kubectl get pv
    NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    pv001   2Gi        RWO,RWX        Retain           Available                                   2m32s
    
  3. Create and consume the PVC

    [root@master-0 ~]# kubectl explain pod.spec.volumes.persistentVolumeClaim
    KIND:     Pod
    VERSION:  v1
    
    RESOURCE: persistentVolumeClaim <Object>
    
    DESCRIPTION:
        PersistentVolumeClaimVolumeSource represents a reference to a
        PersistentVolumeClaim in the same namespace. More info:
        https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
    
        PersistentVolumeClaimVolumeSource references the user's PVC in the same
        namespace. This volume finds the bound PV and mounts that volume for the
        pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around
        another type of volume that is owned by someone else (the system).
    
    FIELDS:
      claimName <string> -required-         # name of the PVC to use
        ClaimName is the name of a PersistentVolumeClaim in the same namespace as
        the pod using this volume. More info:
        https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
    
      readOnly <boolean>                  # whether to mount read-only
        Will force the ReadOnly setting in VolumeMounts. Default false.
    [root@master-0 volume]# cat pvc-nfs.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mypvc
      namespace: default
    spec:
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-vol-vpc
      namespace: default
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: mypvc
    [root@master-0 volume]# kubectl apply -f pvc-nfs.yaml
    persistentvolumeclaim/mypvc created
    pod/pod-vol-vpc created
    [root@master-0 volume]# kubectl get pv
    NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
    pv001   2Gi        RWO,RWX        Retain           Bound    default/mypvc                           31m
    [root@master-0 volume]# kubectl get pvc
    NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    mypvc   Bound    pv001    2Gi        RWO,RWX                       32s
    
  • Note that before Kubernetes 1.9 a PV could be deleted no matter what state it was in; also, apart from pods, whose data lives on the nodes, resources such as PVs and PVCs are stored in the apiserver's backing store, etcd
