Velero Installation and Usage Manual

Published June 29, 2023

Introduction

1.1 Overview

Velero (formerly Heptio Ark) gives you the ability to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero on a public cloud platform or in an on-premises private cloud. It lets you:

- back up cluster data and restore it if the cluster fails;
- migrate cluster resources to other clusters;
- replicate your production cluster to development and testing clusters.

Velero consists of:

- a server that runs on your cluster;
- a command-line client that runs locally.

1.2 How Velero works

Each Velero operation (on-demand backup, scheduled backup, restore) is a custom resource, defined with Kubernetes custom resource definitions and stored in etcd. Velero also includes controllers that process the custom resources to perform backups, restores, and all related operations. You can back up or restore all objects in your cluster, or filter objects by type, namespace, or label. This makes Velero an ideal choice for Kubernetes disaster recovery, and for snapshotting your application state before performing system operations on the cluster, such as upgrades.

1.2.1 On-demand backups

The backup operation:

1. uploads a tarball of the copied Kubernetes objects into cloud object storage;
2. calls the cloud provider API to create disk snapshots of persistent volumes, if specified.

You can optionally specify backup hooks to be executed during the backup. For example, you might need to tell a database to flush its in-memory buffers to disk before a snapshot is taken. Note that cluster backups are not strictly atomic: if Kubernetes objects are created or edited while the backup is running, they might not be included in it. The odds of capturing inconsistent information are low, but it can happen.

1.2.2 Scheduled backups

The schedule operation lets you back up your data at recurring intervals. The first backup is performed when the schedule is first created, and subsequent backups happen at the schedule's specified interval, which is given as a Cron expression. Scheduled backups are saved with the name <schedule-name>-<timestamp>, where <timestamp> is formatted as YYYYMMDDhhmmss.

1.2.3 Restores

The restore operation lets you restore all of the objects and persistent volumes from a previously created backup, or only a filtered subset of them. Velero supports multiple namespace remappings: in a single restore operation, objects from namespace "abc" can be recreated under namespace "def", and objects from namespace "123" under "456".

The default name of a restore is <backup-name>-<timestamp>, where <timestamp> is formatted as YYYYMMDDhhmmss; you can also specify a custom name. Restored objects additionally carry a label with key velero.io/restore-name and the restore name as its value.

By default, backup storage locations are created in read-write mode. During a restore, however, you can configure a backup storage location to be read-only, which disables backup creation and deletion for that storage location. This is useful to ensure that no backups are inadvertently created or deleted during a restore scenario.

You can optionally specify restore hooks to be executed during a restore or after resources are restored. For example, you might need to perform a custom database restore operation before the database application containers start.

1.2.4 Backup workflow

When you run velero backup create test-backup:

1. The Velero client calls the Kubernetes API server to create a Backup object.
2. The BackupController is notified of the new Backup object and performs validation.
3. The BackupController begins the backup process; it collects the data to back up by querying the API server for resources.
4. The BackupController calls the object storage service (for example, AWS S3) to upload the backup file.

By default, velero backup create makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags; run velero backup create --help to see the available flags. Snapshots can be disabled with the option --snapshot-volumes=false.

1.2.5 Backed-up API versions

Velero backs up resources using the Kubernetes API server's preferred version for each group/resource. When restoring, the same API group/version must exist in the target cluster for the restore to succeed.

For example, suppose the cluster being backed up has a gizmos resource in the things API group, with API versions things/v1alpha1, things/v1beta1, and things/v1, and the server's preferred version is things/v1. Then the backup data for gizmos is taken from the things/v1 API endpoint. When restoring a backup from this cluster, the target cluster must have the things/v1 endpoint for the gizmos backup data to be restored. Note that things/v1 does not need to be the preferred version in the target cluster; it only needs to exist.

1.2.6 Setting a backup expiry

When creating a backup, you can specify a TTL (time to live) by adding the --ttl flag. If Velero detects that a backup resource has expired, it deletes the corresponding backup data:

- the backup resource;
- the backup file in cloud object storage;
- all PersistentVolume snapshots;
- all associated restores.

The TTL flag lets you specify the backup retention period as a value in hours, minutes, and seconds, in the form --ttl 24h0m0s. If not specified, a default TTL of 30 days is applied.

1.2.7 Object storage sync

Velero treats object storage as the source of truth. It continuously checks that the correct backup resources always exist. If there is a properly formatted backup file in the storage bucket but no corresponding backup resource in the Kubernetes API, Velero synchronizes the information from object storage into Kubernetes. This allows the restore functionality to work in cluster migration scenarios, where the original backup objects do not exist in the new cluster. Likewise, if a backup object exists in Kubernetes but not in object storage, it is deleted from Kubernetes, since the backup tarball no longer exists.

1.3 Backup storage locations and volume snapshot locations

Velero has two custom resources, BackupStorageLocation and VolumeSnapshotLocation, used to configure where Velero backups and their associated persistent volume snapshots are stored.

- BackupStorageLocation: defined as a bucket, a prefix within that bucket under which all Velero data is stored, and a set of additional provider-specific fields. The fields it contains are described in detail later.
- VolumeSnapshotLocation: defined entirely by provider-specific fields (for example the AWS region, the Azure resource group, the Portworx snapshot type, etc.). These fields are described in detail later.

Users can pre-configure one or more possible BackupStorageLocation objects and one or more VolumeSnapshotLocation objects, and can select, at backup creation time, where the backup and its associated snapshots should be stored. This configuration design supports many different uses, including:

- taking snapshots of more than one kind of persistent volume in a single Velero backup, for example in a cluster with both EBS volumes and Portworx volumes;
- backing up data to different storage in different regions;
- for volume providers that support it (such as Portworx), keeping some snapshots locally on the cluster while storing others in the cloud.

1.3.1 Limitations and caveats

- Velero supports only one set of credentials per provider. If backend storage locations use the same provider, it is not possible to use different credentials for different locations.
- Volume snapshots are still limited by where your provider allows you to create snapshots; backing up cluster data with volumes across public cloud providers is not supported. For example, AWS and Azure do not allow you to create a volume snapshot in a different availability zone than the one the volume is in. If you try to run a Velero backup using a volume snapshot location that differs from the region where your cluster's volumes are, the backup will fail.
- Each Velero backup has exactly one BackupStorageLocation and one VolumeSnapshotLocation. It is not possible (yet) to send a single Velero backup to multiple backup storage locations simultaneously, or a single volume snapshot to multiple locations. However, if backup redundancy across locations matters to you, you can always set up multiple scheduled backups that differ only in the storage locations they use.
- Cross-provider snapshots are not supported. If your cluster has more than one type of volume, for example EBS and Portworx, but a VolumeSnapshotLocation is configured only for EBS, Velero will snapshot only the EBS volumes.
- Restore data is stored under a prefix/subdirectory of the main Velero bucket, corresponding to the BackupStorageLocation selected by the user at backup creation time.

1.3.2 Use cases
(1) Snapshot more than one kind of persistent volume in a single Velero backup

Create the snapshot locations:

```shell
velero snapshot-location create ebs-us-east-1 --provider aws --config region=us-east-1
velero snapshot-location create portworx-cloud --provider portworx --config type=cloud
```

Create the backup:

```shell
velero backup create full-cluster-backup --volume-snapshot-locations ebs-us-east-1,portworx-cloud
```

Because in this example each of the two providers backing our storage (ebs-us-east-1 for aws and portworx-cloud for portworx) has only one possible volume snapshot location configured, Velero can also create the backup without them being specified explicitly:

```shell
velero backup create full-cluster-backup
```

(2) Store backups in different object storage buckets in different regions

Create the backup storage locations:

```shell
velero backup-location create default --provider aws --bucket velero-backups --config region=us-east-1
velero backup-location create s3-alt-region --provider aws --bucket velero-backups-alt --config region=us-west-1
```

Create the backup:

```shell
# The Velero server will automatically store backups in the backup storage location named "default" if
# one is not specified when creating the backup. You can alter which backup storage location is used
# by default by setting the --default-backup-storage-location flag on the `velero server` command (run
# by the Velero deployment) to the name of a different backup storage location.
velero backup create full-cluster-backup
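The default-versus-explicit selection described in the comment above can be sketched as a tiny shell function. This is our own illustration of the rule, not Velero code; the function name and parameters are invented for the example:

```shell
# Sketch: how the backup storage location is resolved. An explicit
# --storage-location value wins; otherwise the server-side default
# (--default-backup-storage-location, normally "default") applies.
resolve_storage_location() {
  explicit=$1        # value passed via --storage-location, may be empty
  server_default=$2  # value of --default-backup-storage-location
  if [ -n "$explicit" ]; then
    echo "$explicit"
  else
    echo "$server_default"
  fi
}

resolve_storage_location "" default             # falls back to the default
resolve_storage_location s3-alt-region default  # explicit flag wins
```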

Or:

```shell
velero backup create full-cluster-alternate-location-backup --storage-location s3-alt-region
```

(3) For volume providers that support it, store some snapshots locally on the cluster and some in the public cloud

Create the snapshot locations:

```shell
velero snapshot-location create portworx-local --provider portworx --config type=local
velero snapshot-location create portworx-cloud --provider portworx --config type=cloud
```

Create the backup:

```shell
# Note that since in this example you have two possible volume snapshot locations for the Portworx
# provider, you need to explicitly specify which one to use when creating a backup. Alternately,
# you can set the --default-volume-snapshot-locations flag on the `velero server` command (run by
# the Velero deployment) to specify which location should be used for each provider by default, in
# which case you don't need to specify it when creating a backup.
velero backup create local-snapshot-backup --volume-snapshot-locations portworx-local
```

Or:

```shell
velero backup create cloud-snapshot-backup --volume-snapshot-locations portworx-cloud
```

(4) Use the default locations

Create the storage locations:

```shell
velero backup-location create default --provider aws --bucket velero-backups --config region=us-west-1
velero snapshot-location create ebs-us-west-1 --provider aws --config region=us-west-1
```

Create the backup:

```shell
# Velero will automatically use your configured backup storage location and volume snapshot location.
# Nothing needs to be specified when creating a backup.
velero backup create full-cluster-backup
```

2. Installing Velero

2.1 Prerequisites

- Access to a Kubernetes cluster, v1.10 or later, with DNS and container networking enabled.
- kubectl installed locally.

Velero uses object storage to store backups and associated artifacts. It can also optionally integrate with supported block storage systems to snapshot your persistent volumes.

2.2 Installing Velero

2.2.1 Download the velero binary package

Download the velero binary client package for the desired version from the project's GitHub releases page. This example uses v1.5.2:

```shell
wget https://github.com/vmware-tanzu/velero/releases/download/v1.5.2/velero-v1.5.2-linux-amd64.tar.gz
tar -zxvf velero-v1.5.2-linux-amd64.tar.gz
```

2.2.2 Install the minio object store

(1) Installing minio inside the Kubernetes cluster

minio is officially recommended to be installed in the Kubernetes cluster. The examples/minio/ directory in the package extracted above contains the YAML for installing minio in Kubernetes. Its content is shown below; change the minio Service type to NodePort as described, then install:

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: minio
  template:
    metadata:
      labels:
        component: minio
    spec:
      volumes:
      - name: storage
        emptyDir: {}
      - name: config
        emptyDir: {}
      containers:
      - name: minio
        image: minio/minio:latest
        imagePullPolicy: IfNotPresent
        args:
        - server
        - /storage
        - --config-dir=/config
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: "/storage"
        - name: config
          mountPath: "/config"
---
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  # ClusterIP is recommended for production environments.
  # Change to NodePort if needed per documentation,
  # but only if you run Minio in a test/trial environment, for example with Minikube.
  type: NodePort
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    component: minio
---
apiVersion: batch/v1
kind: Job
metadata:
  namespace: velero
  name: minio-setup
  labels:
    component: minio
spec:
  template:
    metadata:
      name: minio-setup
    spec:
      restartPolicy: OnFailure
      volumes:
      - name: config
        emptyDir: {}
      containers:
      - name: mc
        image: minio/mc:latest
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
        volumeMounts:
        - name: config
          mountPath: "/config"
```

Run the following command to install minio:

```shell
kubectl apply -f examples/minio/
```

Check the pod status and wait until the minio pod is Running.

(2) Installing minio outside the Kubernetes cluster

If you need to back up and restore data across different Kubernetes and storage pool clusters, the minio server should be installed outside the Kubernetes cluster, so that a catastrophic cluster failure does not affect the backup data. The following installs minio outside the cluster from the binary release.

Download the binary on the server where minio is to be installed:

```shell
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
sudo mv minio /usr/local/bin/
```

Check the version information:

```shell
minio --version
```

Prepare the disk for the object storage (skipped here).

Manage the Minio service with systemd. On systems that use the systemd init system, create the user and group for running the Minio service:

```shell
sudo groupadd --system minio
sudo useradd -s /sbin/nologin --system -g minio minio
```

Give the minio user ownership of the /data directory (the mount point of the disk prepared above):

```shell
sudo chown -R minio:minio /data/
```

Create a systemd service unit file for Minio:

```shell
vim /etc/systemd/system/minio.service
```

```ini
[Unit]
Description=Minio
Documentation=https://docs.min.io
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/usr/local/bin/minio

[Service]
WorkingDirectory=/data
User=minio
Group=minio
EnvironmentFile=-/etc/default/minio
ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /etc/default/minio\"; exit 1; fi"
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES

# Let systemd restart this service always
Restart=always

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=infinity
SendSIGKILL=no

[Install]
WantedBy=multi-user.target
```

Create the Minio environment file /etc/default/minio:

```shell
# Volume to be used for Minio server.
MINIO_VOLUMES="/data"
# Use if you want to run Minio on a custom port.
MINIO_OPTS="--address :9000"
# Access key of the server.
MINIO_ACCESS_KEY=minio
# Secret key of the server.
MINIO_SECRET_KEY=minio123
```

- MINIO_ACCESS_KEY: an access key of at least 3 characters;
- MINIO_SECRET_KEY: a secret key of at least 8 characters.

Reload systemd and start the minio service:

```shell
sudo systemctl daemon-reload
sudo systemctl start minio
```

2.2.3 Install the velero server

(1) When minio is installed inside the Kubernetes cluster, install the velero server as follows:

```shell
velero install \
  --image velero/velero:v1.3.0 \
  --plugins velero/velero-plugin-for-aws:v1.0.0 \
  --provider aws \
  --bucket velero \
  --namespace velero \
  --secret-file ./credentials-velero \
  --velero-pod-cpu-request 200m --velero-pod-mem-request 200Mi \
  --velero-pod-cpu-limit 1000m --velero-pod-mem-limit 1000Mi \
  --use-volume-snapshots=false \
  --use-restic \
  --restic-pod-cpu-request 200m --restic-pod-mem-request 200Mi \
  --restic-pod-cpu-limit 1000m --restic-pod-mem-limit 1000Mi \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://<minio-address>:9000
```

(2) When minio is installed outside the Kubernetes cluster, install the velero server as follows.

Because minio is installed outside the cluster, pods cannot reach the external service directly; create an "external" style Service, backed by a manually defined Endpoints object, to access the external minio. The YAML is as follows:

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: velero
spec:
  ports:
  - port: 9000
---
kind: Endpoints
apiVersion: v1
metadata:
  name: minio
  namespace: velero
subsets:
- addresses:
  - ip: 192.168.10.149
  ports:
  - port: 9000
```

Save the manifest to a file and create the Service:

```shell
kubectl apply -f <filename>
```

When minio is installed outside the cluster, the bucket named velero must be created manually: open minioIp:port (192.168.10.149:9000) and create the velero bucket through the web page. Then install the velero server the same way as when minio is inside the cluster, and wait for the pods to run successfully.

2.2.4 Uninstall velero

If you want to entirely uninstall Velero from your cluster, the following commands remove all resources created by velero install:

```shell
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```

3. Using velero

3.1 Disaster recovery

You can back up cluster data periodically with scheduled and read-only backups, and restore promptly when the cluster fails or an upgrade goes wrong. If you need to roll back to a previous state after an unexpected event such as a service outage, use Velero as follows.

(1) After running the Velero server on your cluster for the first time, set up a daily backup (substituting into the command as needed):

```shell
velero schedule create <schedule-name> --schedule "0 7 * * *"
```

This creates backup objects named <schedule-name>-<timestamp>. The default backup retention period, expressed as TTL (time to live), is 30 days (720 hours); you can change the backup expiry with the --ttl flag.

(2) When a failure or unsuccessful upgrade occurs, the resources need to be recreated from the backup data.

(3) Update the backup storage location to read-only mode (this prevents backup objects from being created or deleted in the backup storage location during the restore process):

```shell
kubectl patch backupstoragelocation <storage-location-name> \
  --namespace velero \
  --type merge \
  --patch '{"spec":{"accessMode":"ReadOnly"}}'
```

(4) Create a restore from your most recent Velero backup:

```shell
velero restore create --from-backup <schedule-name>-<timestamp>
```

(5) When the restore is complete, change the backup storage location back to read-write mode:

```shell
kubectl patch backupstoragelocation <storage-location-name> \
  --namespace velero \
  --type merge \
  --patch '{"spec":{"accessMode":"ReadWrite"}}'
```

3.2 Cluster migration

As long as you point each Velero instance to the same object storage, Velero can help you migrate resources from one cluster to another. This scenario assumes that your clusters are hosted by the same cloud provider. Note that Velero itself does not support migrating persistent volume snapshots across cloud providers; if you want to migrate volume data between cloud platforms, enable restic, which backs up volume contents at the file system level.

(1) (Cluster 1) If you have not yet backed up the cluster, first back up the entire cluster (substituting as needed):

```shell
velero backup create <backup-name>
```

The default backup retention period, expressed as TTL, is 30 days (720 hours); you can change it with the --ttl flag.

(2) (Cluster 2) Configure the BackupStorageLocation and VolumeSnapshotLocation objects to point to the minio location used by Cluster 1, and configure them as read-only, which can be specified with the --access-mode=ReadOnly flag when creating the storage location.

(3) (Cluster 2) Make sure the Velero Backup object created on Cluster 1 is present, i.e. that the Velero resources have been synchronized with the backup files in cloud storage:

```shell
velero backup describe <backup-name>
```

Note: the default sync interval is 1 minute; you can use --backup-sync-period on the Velero server to set the sync interval.

(4) (Cluster 2) Once you have confirmed that the right backup exists and its task has completed, you can restore everything with:

```shell
velero restore create --from-backup <backup-name>
```

(5) Verify the migration.

Check that the restore on Cluster 2 has completed:

```shell
velero restore get
velero restore describe <restore-name>
```

If you run into issues, make sure that Velero is running in the same namespace in both clusters.

3.3 Filtering backed-up objects

When backing up resources, velero supports filtering the objects to back up in different ways, mainly the following two.

Objects to include:

- --include-namespaces: back up all resources in the given namespace, excluding cluster-scoped resources.
- --include-resources: the resource types to back up.
- --include-cluster-resources: whether to back up cluster-scoped resources. This option can take three values:
  - true: include all cluster-scoped resources;
  - false: include no cluster-scoped resources;
  - nil ("auto", or not supplied): when backing up or restoring all namespaces, cluster-scoped resources are included (default: true); when namespace filtering is used, cluster-scoped resources are not included (default: false). Some resources tied to specific namespaces (such as PVs) are still backed up when their PVCs are backed up, unless --include-cluster-resources=false explicitly excludes cluster resources.
- --selector: back up resources matched by a label selector.

When specific resources are included in a backup, all other resources are excluded; if both a wildcard and specific resources are listed, the wildcard takes precedence.

(1) Back up a namespace and its objects:

```shell
velero backup create <backup-name> --include-namespaces <namespace>
```

(2) Restore two namespaces and their objects:

```shell
velero restore create <restore-name> --include-namespaces <namespace1>,<namespace2>
```

(3) Back up all deployments in the cluster:

```shell
velero backup create <backup-name> --include-resources deployments
```

(4) Restore all deployments and configmaps in the cluster:

```shell
velero restore create <restore-name> --include-resources deployments,configmaps
```

(5) Back up the deployments in a specific namespace:

```shell
velero backup create <backup-name> --include-resources deployments --include-namespaces <namespace>
```

(6) Back up the entire cluster, including cluster-scoped resources:

```shell
velero backup create <backup-name>
```

(7) Restore only namespaced resources in the cluster:

```shell
velero restore create <restore-name> --include-cluster-resources=false
```

(8) Back up a namespace and include cluster-scoped resources:

```shell
velero backup create <backup-name> --include-namespaces <namespace> --include-cluster-resources=true
```
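The three-valued --include-cluster-resources behaviour described above can be sketched as a small shell function. The function and its names are ours, illustrating the rule only; it deliberately ignores the PV-for-PVC special case noted above:

```shell
# Sketch of the --include-cluster-resources default logic:
# "true"/"false" are explicit; "nil" (flag unset) includes cluster-scoped
# resources only when no namespace filter narrows the backup.
include_cluster_resources() {
  flag=$1          # "true", "false", or "nil"
  ns_filtered=$2   # "yes" if --include-namespaces narrows the backup
  case $flag in
    true)  echo include ;;
    false) echo exclude ;;
    nil)
      if [ "$ns_filtered" = yes ]; then
        echo exclude
      else
        echo include
      fi
      ;;
  esac
}

include_cluster_resources nil no    # full-cluster backup -> include
include_cluster_resources nil yes   # namespace-filtered  -> exclude
```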

(9) Back up resources matching a label selector:

```shell
velero backup create <backup-name> --selector <key>=<value>
```

Objects to exclude:

- --exclude-namespaces: resources in the given namespace are not backed up.
- --exclude-resources: resources of the given type are not backed up.
- velero.io/exclude-from-backup=true: even when the label selector matches a resource, it is not backed up if it carries this label.

Specific resources excluded from a backup stay excluded even when a wildcard would otherwise match them.

(10) Exclude the namespace kube-system from a cluster backup:

```shell
velero backup create <backup-name> --exclude-namespaces kube-system
```

(11) Exclude two namespaces during a restore:

```shell
velero restore create <restore-name> --exclude-namespaces <namespace1>,<namespace2>
```

(12) Exclude Secrets from a backup:

```shell
velero backup create <backup-name> --exclude-resources secrets
```

(13) Exclude secrets and rolebindings from a backup:

```shell
velero backup create <backup-name> --exclude-resources secrets,rolebindings
```

(14) Exclude a specific item from a backup

A single item can be excluded from a backup even if it matches the resource/namespace/label selector defined in the backup spec. Mark it as follows:

```shell
kubectl label -n <namespace> <resource-type>/<resource-name> velero.io/exclude-from-backup=true
```

(15) Specify the backup order for resources of a specific kind

With the --ordered-resources flag, resources of specific kinds can be backed up in a specific order. Specify the resource kind and a list of object names for that kind. Object names are separated by commas and have the form "namespace/resourcename"; for cluster-scoped resources, use just the resource name. Key-value pairs in the map are separated by semicolons, and the resource kind is given in plural form:

```shell
velero backup create backupName --include-cluster-resources=true --ordered-resources 'pods=ns1/pod1,ns1/pod2;persistentvolumes=pv4,pv8' --include-namespaces=ns1
velero backup create backupName --ordered-resources 'statefulsets=ns1/sts1,ns1/sts0' --include-namespaces=ns1
```

3.4 Backup hooks

Velero supports executing predefined commands in containers before and after a backup task runs. When performing a backup, you can specify one or more commands to execute in a container of a pod being backed up. The commands can be configured to run before any custom action processing ("pre" hooks), or after all custom actions are complete and any additional items specified by the backup have been processed ("post" hooks). Note that hooks are not executed within a shell on the containers.

There are two ways to specify hooks: annotations declared on the pod itself, and in the Spec when defining a Backup.

3.4.1 Declaring hooks in pod annotations

You can use the following annotations on a pod to make Velero execute hooks when backing it up:

Pre hooks

- pre.hook.backup.velero.io/container: the container where the command will be executed; defaults to the first container in the pod. Optional.
- pre.hook.backup.velero.io/command: the command to execute. If it needs multiple arguments, specify the command as a JSON array, for example ["/usr/bin/uname", "-a"].
- pre.hook.backup.velero.io/on-error: what to do if the command returns a non-zero exit code. Defaults to "Fail"; valid values are "Fail" and "Continue". Optional.
- pre.hook.backup.velero.io/timeout: how long to wait for the command to execute; the hook is considered failed if the command exceeds the timeout. Defaults to 30 seconds. Optional.

Post hooks

- post.hook.backup.velero.io/container: the container where the command will be executed; defaults to the first container in the pod. Optional.
- post.hook.backup.velero.io/command: the command to execute. If it needs multiple arguments, specify the command as a JSON array, for example ["/usr/bin/uname", "-a"].
- post.hook.backup.velero.io/on-error: what to do if the command returns a non-zero exit code. Defaults to "Fail"; valid values are "Fail" and "Continue". Optional.
- post.hook.backup.velero.io/timeout: how long to wait for the command to execute; the hook is considered failed if the command exceeds the timeout. Defaults to 30 seconds. Optional.

The following example walks through using both pre and post hooks to freeze a file system. Freezing the file system is useful for ensuring that all pending disk I/O operations have completed before a snapshot is taken.

(1) Create the objects to back up:

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-example
  labels:
    app: nginx
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-logs
  namespace: nginx-example
  labels:
    app: nginx
spec:
  # Optional:
  # storageClassName:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: nginx-logs
        persistentVolumeClaim:
          claimName: nginx-logs
      containers:
      - image: nginx:1.17.6
        name: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/var/log/nginx"
          name: nginx-logs
          readOnly: false
      - image: ubuntu:bionic
        name: fsfreeze
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: "/var/log/nginx"
          name: nginx-logs
          readOnly: false
        command:
        - "/bin/bash"
        - "-c"
        - "sleep infinity"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: my-nginx
  namespace: nginx-example
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
```

(2) Add the hook annotations to the objects being backed up:

```shell
kubectl annotate pod -n nginx-example -l app=nginx \
  pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
  pre.hook.backup.velero.io/container=fsfreeze \
  post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
  post.hook.backup.velero.io/container=fsfreeze
```

(3) Create a backup to test the pre hooks and post hooks. You can use the Velero logs to verify that the hooks are running and exiting without errors:

```shell
velero backup create nginx-hook-test
velero backup get nginx-hook-test
velero backup logs nginx-hook-test | grep hookCommand
```

When multiple commands need to be executed in a hook, add the annotation in the following form: separate the arguments of a single command with commas, and separate multiple commands with &&:

```shell
pre.hook.backup.velero.io/command='["/bin/bash", "-c", "echo hello > hello.txt && echo goodbye > goodbye.txt"]'
```

3.4.2 Declaring hooks when defining a backup task

See the Backup definition example in the related CRD resources section.

3.5 Restoring backup data

3.5.1 Restoring backup data into a different namespace than the one backed up

Velero can restore resources into a namespace other than the one they were backed up from, using the --namespace-mappings flag:

```shell
velero restore create RESTORE_NAME --from-backup BACKUP_NAME --namespace-mappings old-ns-1:new-ns-1,old-ns-2:new-ns-2
```

3.5.2 What happens when a user deletes a restore object

A restore object represents a restore operation. There are two ways to delete a restore object:

1.

velero restore delete: this command deletes the custom resource object representing the restore, along with its corresponding log and result files. It does not, however, delete from the cluster any objects that the restore created.

2. kubectl -n velero delete restore: this command deletes the custom resource object representing the restore, but does not delete from object storage any log/result files created during the restore, nor any objects created in the cluster.

3.5.3 Restore command-line options

To see all commands related to restores, run velero restore --help. To see all options associated with a specific command, give that command the --help flag; for example, velero restore create --help shows all options associated with the create command.

3.5.4 How existing NodePort Services are handled after a restore

By default, automatically assigned NodePorts are deleted, and the Service gets a newly auto-assigned port after the restore. Explicitly specified NodePorts are auto-detected using the last-applied-config annotation and are preserved after the restore. NodePorts can be explicitly specified in the Service definition as spec.ports[*].nodePort.

Because of the operational complexity, it is not always possible to explicitly set nodePorts on some large clusters. The official Kubernetes documentation points out that when nodePorts are explicitly specified, the user has to take care of possible port conflicts.

Clusters that do not explicitly specify nodePorts may still need to restore the original NodePorts in a disaster: the originally auto-assigned node ports are very likely already defined on the load balancer that sits in front of the cluster. If the nodePorts change, updating all of those nodePorts on the load balancer after a disaster is yet another complex operation.

Velero has a flag that lets the user decide to preserve the original nodePorts: the velero restore create subcommand has a --preserve-nodeports flag to protect Service nodePorts. The flag preserves the original nodePorts from the backup and can be used as --preserve-nodeports or --preserve-nodeports=true. When this flag is given, Velero does not remove the nodePorts when restoring a Service, but tries to use the nodePorts that were recorded at backup time.

Trying to preserve nodePorts may cause port conflicts when restoring in the following cases:

- If the nodePort from the backup is already allocated on the target cluster, Velero prints an error log like the following and continues the restore operation:

```
time="2020-11-23T12:58:31+03:00" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:1002" restore=velero/test-with-3-svc-25
time="2020-11-23T12:58:31+03:00" level=info msg="Restoring Services with original NodePort(s)" cmd=_output/bin/linux/amd64/velero logSource="pkg/restore/service_action.go:61" pluginName=velero restore=velero/test-with-3-svc-25
time="2020-11-23T12:58:31+03:00" level=info msg="Attempting to restore Service: hello-service" logSource="pkg/restore/restore.go:1107" restore=velero/test-with-3-svc-25
time="2020-11-23T12:58:31+03:00" level=error msg="error restoring hello-service: Service \"hello-service\" is invalid: spec.ports[0].nodePort: Invalid value: 31536: provided port is already allocated" logSource="pkg/restore/restore.go:1170" restore=velero/test-with-3-svc-25
```

- If the nodePort from the backup is not within the nodePort range of the target cluster, Velero prints an error log like the following and continues the restore operation. The default nodePort range in Kubernetes is 30000-32767, but on the example cluster the nodePort range is 20000-22767, and the restore attempts to use nodePort 31536:

```
time="2020-11-23T13:09:17+03:00" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:1002" restore=velero/test-with-3-svc-25
time="2020-11-23T13:09:17+03:00" level=info msg="Restoring Services with original NodePort(s)" cmd=_output/bin/linux/amd64/velero logSource="pkg/restore/service_action.go:61" pluginName=velero restore=velero/test-with-3-svc-25
time="2020-11-23T13:09:17+03:00" level=info msg="Attempting to restore Service: hello-service" logSource="pkg/restore/restore.go:1107" restore=velero/test-with-3-svc-25
time="2020-11-23T13:09:17+03:00" level=error msg="error restoring hello-service: Service \"hello-service\" is invalid: spec.ports[0].nodePort: Invalid value: 31536: provided port is not in the valid range. The range of valid ports is 20000-22767" logSource="pkg/restore/restore.go:1170" restore=velero/test-with-3-svc-25
```

3.5.5 Changing the StorageClass of PVs/PVCs

Velero can change the storage class of persistent volumes and persistent volume claims during a restore. To define the storage class mapping in advance, create a ConfigMap in the Velero namespace like the following:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: change-storage-class-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in restore item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/change-storage-class: RestoreItemAction
data:
  # add 1+ key-value pairs here, where the key is the old
  # storage class name and the value is the new storage
  # class name.
  <old-storage-class>: <new-storage-class>
```

Velero can also update the selected-node annotation of persistent volume claims during a restore; if the selected node does not exist in the cluster, the annotation is removed from the PersistentVolumeClaim. Create the node mapping configuration in the Velero namespace in advance, as follows:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: change-pvc-node-selector-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in restore item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/change-pvc-node-selector: RestoreItemAction
data:
  # add 1+ key-value pairs here, where the key is the old
  # node name and the value is the new node name.
  <old-node-name>: <new-node-name>
```

3.7 Restore hooks

Velero supports restore hooks: custom actions that can be executed before a restore task runs or after the restore process. They come in two forms:

1. InitContainer restore hooks: these add init containers to a restored pod before its application containers start, to perform any necessary setup.
2. Exec restore hooks: these can be used to execute custom commands or scripts in the containers of a restored Kubernetes

pod.

3.7.1 InitContainer restore hooks

Before the restore, init containers are used to add the hooks into the pod; you can use these init containers to run any operations needed for the pod to resume running from its backed-up state. The init container added by the restore hook will be the first init container in the restored pod's podSpec. If the pod's volumes were backed up with restic, an init container named restic-wait, which restores the backed-up volumes, is added after it. Note: this ordering can be changed by any custom webhooks installed in the cluster.

There are two ways to specify InitContainer restore hooks:

(1) Specifying them in pod annotations

The following annotations can add InitContainer restore hooks to a pod:

- init.hook.restore.velero.io/container-image: the container image of the init container to be added.
- init.hook.restore.velero.io/container-name: the name of the init container to be added.
- init.hook.restore.velero.io/command: the task or command that the init container will execute.

Example: before taking the backup, add the annotations to the pod with the following command:

```shell
kubectl annotate pod -n <namespace> <pod-name> \
  init.hook.restore.velero.io/container-name=restore-hook \
  init.hook.restore.velero.io/container-image=alpine:latest \
  init.hook.restore.velero.io/command='["/bin/ash", "-c", "date"]'
```

With the annotations above, Velero adds the following init container to the pod after the restore:

```json
{
  "command": [
    "/bin/ash",
    "-c",
    "date"
  ],
  "image": "alpine:latest",
  "imagePullPolicy": "Always",
  "name": "restore-hook"
  ...
}
```

(2) Declaring them in the Restore definition spec

InitContainer restore hooks can also be specified in the RestoreSpec. See the documentation on the Restore API type for how to specify hooks in a restore spec.

Example: the following specifies InitContainer restore hooks in a RestoreSpec:

```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: r2
  namespace: velero
spec:
  backupName: b2
  excludedResources: ...
  includedNamespaces:
  - '*'
  hooks:
    resources:
    - name: restore-hook-1
      includedNamespaces:
      - app
      postHooks:
      - init:
          initContainers:
          - name: restore-hook-init1
            image: alpine:latest
            volumeMounts:
            - mountPath: /restores/pvc1-vm
              name: pvc1-vm
            command:
            - /bin/ash
            - -c
            - echo -n "FOOBARBAZ" >> /restores/pvc1-vm/foobarbaz
          - name: restore-hook-init2
            image: alpine:latest
            volumeMounts:
            - mountPath: /restores/pvc2-vm
              name: pvc2-vm
            command:
            - /bin/ash
            - -c
            - echo -n "DEADFEED" >> /restores/pvc2-vm/deadfeed
```

Once the Restore instance above is created, the following two init containers are added to every pod in the app namespace:

```json
{
  "command": [
    "/bin/ash",
    "-c",
    "echo -n \"FOOBARBAZ\" >> /restores/pvc1-vm/foobarbaz"
  ],
  "image": "alpine:latest",
  "imagePullPolicy": "Always",
  "name": "restore-hook-init1",
  "resources": {},
  "terminationMessagePath": "/dev/termination-log",
  "terminationMessagePolicy": "File",
  "volumeMounts": [
    {
      "mountPath": "/restores/pvc1-vm",
      "name": "pvc1-vm"
    }
  ]
  ...
}
{
  "command": [
    "/bin/ash",
    "-c",
    "echo -n \"DEADFEED\" >> /restores/pvc2-vm/deadfeed"
  ],
  "image": "alpine:latest",
  "imagePullPolicy": "Always",
  "name": "restore-hook-init2",
  "resources": {},
  "terminationMessagePath": "/dev/termination-log",
  "terminationMessagePolicy": "File",
  "volumeMounts": [
    {
      "mountPath": "/restores/pvc2-vm",
      "name": "pvc2-vm"
    }
  ]
  ...
}
```

3.7.2 Exec restore hooks

After the restored pod starts, an Exec restore hook executes a command in a container of the restored pod. If a pod carries the annotation post.hook.restore.velero.io/command, that is the only hook executed for the pod; no hooks from the RestoreSpec are executed in it.

There are two ways to specify Exec restore hooks:

(1) Specifying them in pod annotations

The following annotations can add Exec restore hooks to a pod:

- post.hook.restore.velero.io/container: the name of the container in which to execute the hook; defaults to the first container. Optional.
- post.hook.restore.velero.io/command: the command to execute in the container. Required.
- post.hook.restore.velero.io/on-error: how to handle execution failures. Valid values are Fail and Continue; defaults to Continue. With Continue, execution failures are only logged; with Fail, no further hooks are executed and the restore status will be PartiallyFailed. Optional.
- post.hook.restore.velero.io/exec-timeout: how long to wait once execution has started; defaults to 30 seconds. Optional.
- post.hook.restore.velero.io/wait-timeout: how long to wait for the container to become ready. It should be long enough for the container to start, and for all preceding hooks in the same container to complete. The wait timeout begins when the pod is restored, and pulling the image and mounting the volumes may take a while; if unset, the restore waits indefinitely. Optional.

Example: before taking the backup, add the annotations to the pod with the following command:

```shell
kubectl annotate pod -n <namespace> <pod-name> \
  post.hook.restore.velero.io/container=postgres \
  post.hook.restore.velero.io/command='["/bin/bash", "-c", "psql < /backup/backup.sql"]' \
  post.hook.restore.velero.io/wait-timeout=5m \
  post.hook.restore.velero.io/exec-timeout=45s \
  post.hook.restore.velero.io/on-error=Continue
```

(2) Specifying them in the RestoreSpec

Exec restore hooks can also be specified in the RestoreSpec; see the documentation on the Restore API type for how to specify hooks there. The following example specifies multiple Exec restore hooks in a RestoreSpec; as it shows, the RestoreSpec allows multiple hooks for a single pod:

```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: r2
  namespace: velero
spec:
  backupName: b2
  excludedResources: ...
  includedNamespaces:
  - '*'
  hooks:
    resources:
    - name: restore-hook-1
      includedNamespaces:
      - app
      postHooks:
      - exec:
          execTimeout: 1m
          waitTimeout: 5m
          onError: Fail
          container: postgres
          command:
          - /bin/bash
          - '-c'
          - 'while ! pg_isready; do sleep 1; done'
      - exec:
          container: postgres
          waitTimeout: 6m
          execTimeout: 1m
          command:
          - /bin/bash
          - '-c'
          - 'psql < /backup/backup.sql'
      - exec:
          container: sidecar
          command:
          - /bin/bash
          - '-c'
          - 'date > /start'
```

Once the restore task starts, all hooks are executed sequentially in the containers they match. The order of hooks executed within a single container follows the order in the RestoreSpec: in this example, pg_isready runs before psql because both apply to the same container and pg_isready is defined first. If a pod contains multiple Exec restore hooks, all of the containers with pending hooks are running before the hooks execute; but because the hooks run in order, the date hook that runs last may wait several minutes before executing. Velero guarantees that no two hooks within a single pod run in parallel, but hooks in different pods may run in parallel.

3.8 Restoring backups into a different namespace

See section 3.5.1: use the --namespace-mappings flag when creating the restore.

3.9 Container Storage Interface snapshot support in Velero

This feature is under development; the documentation may not be up to date, and some features may not work as expected.

Integrating Container Storage Interface (CSI) snapshot support into Velero enables Velero to back up and restore CSI-backed volumes. By supporting the CSI snapshot APIs, Velero can support any volume provider that has a CSI driver, without requiring a provider-specific Velero plugin.

The prerequisites for taking CSI snapshots with Velero are:

1. The cluster runs Kubernetes 1.17 or later.
2. The cluster runs a CSI driver capable of supporting volume snapshots.
3. When restoring CSI volume snapshots across clusters, the name of the CSI driver in the target cluster must be the same as the CSI driver name on the source cluster, to ensure cross-cluster portability of the CSI volume snapshots.

Make sure the EnableCSI feature is enabled on the Velero server; in addition, the Velero CSI plugin is required for integration with the CSI volume snapshot APIs. Both can be added with the velero install command:

```shell
velero install --features=EnableCSI --plugins=<object-store-plugin>,velero/velero-plugin-for-csi:v0.1.0 ...
```

To include the status of the CSI objects associated with a Velero backup in velero backup describe output, run:

```shell
velero client config set features=EnableCSI
```

The following is the retention policy of Velero's CSI support for volumesnapshot and volumesnapshotcontent objects, and how to modify it:

1. In the volumesnapshotclass created by Velero CSI, the DeletionPolicy for volume snapshots is set to Retain, so deleting a backup does not delete the snapshots produced during that backup. To delete them, first change the volumesnapshotclass policy to Delete; then deleting the volumesnapshot object cascades to delete the volumesnapshotcontent and the snapshot in the storage provider.
2. volumesnapshotcontent objects created during a velero backup that are not bound to a volumesnapshot object are also discovered through labels and deleted when the backup is deleted.
3. The Velero CSI plugin, to back up CSI-backed PVCs, selects the VolumeSnapshotClass in the cluster that has the same driver name and the velero.io/csi-volumesnapshot-class label set on it, for example velero.io/csi-volumesnapshot-class: "true".

How it works: Velero's CSI support does not rely on the Velero VolumeSnapshotter plugin interface. Instead, Velero uses a collection of BackupItemAction plugins that act first against PersistentVolumeClaims. When this BackupItemAction detects that a PersistentVolumeClaim points to a PersistentVolume backed by a CSI driver, it selects the VolumeSnapshotClass with the same driver name that carries the velero.io/csi-volumesnapshot-class label, and then creates a CSI VolumeSnapshot object with the PersistentVolumeClaim as its source. The VolumeSnapshot object lives in the same namespace as the PersistentVolumeClaim used as its source.

The CSI external-snapshotter controller then sees the VolumeSnapshot and creates a VolumeSnapshotContent object, a cluster-scoped resource that will point to the actual, disk-based snapshot in the storage system. The external-snapshotter calls the snapshot method of the CSI driver, which calls the storage system's API to create the snapshot. Once an ID is generated and the storage system marks the snapshot as usable for restore, the VolumeSnapshotContent object's snapshotHandle is updated with that string and readyToUse is set to true.

Velero includes the VolumeSnapshot and VolumeSnapshotContent objects in the tarball produced by the backup, and also uploads all VolumeSnapshot and VolumeSnapshotContent objects as JSON files to the object storage system. When Velero synchronizes a backup into a new cluster, the VolumeSnapshotContent objects are synchronized into the cluster as well, so that Velero can manage backup expiration appropriately.
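For reference, a VolumeSnapshotClass that the selection rule above would match might look like the following sketch. The class name and driver name here are illustrative assumptions, not values from this manual; only the velero.io/csi-volumesnapshot-class label and the driver-name matching come from the behaviour described above:

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: example-csi-snapclass            # illustrative name
  labels:
    # the label the Velero CSI plugin looks for
    velero.io/csi-volumesnapshot-class: "true"
# must match the name of the CSI driver backing the PVs (illustrative)
driver: csi.example.com
# Retain keeps snapshots in the storage system when the backup is deleted
deletionPolicy: Retain
```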

VolumeSnapshotContent的DeletionPolicy与VolumeSnapshotClass的是相同的,在VolumeSnapshotClass上将DeletionPolicy设置为Retain时,将在Velero备份的整个⽣命周期内将卷快照保留在存储系统中,并且在发⽣灾难的情况下(删除带有VolumeSnapshot对象的命名空间)可防⽌在存储系统中删除卷快照。 当Velero备份到期,VolumeSnapshot对象将被删除,VolumeSnapshotContent对象将被更新为具有DeletionPolicy的Delete,释放存储系统上的空间。3.10 更改RBAC权限 默认情况下,Velero使⽤ClusterRole的cluster-adminRBAC策略运⾏,这是为了确保Velero可以备份或还原集群中的任何内容。但是cluster-admin访问是完全开放的,它使Velero组件可以访问集群中的所有内容,你可以根据您的环境和安全需求,考虑是否配置具有更多限制性访问权限的其他RBAC策略。 注意:Role和RoleBindings与命名空间下的资源,PersistentVolume是集群资源,这意味着使⽤限制性⾓⾊和⾓⾊绑定对的任何备份或还原只能管理属于名称空间的资源。如果使⽤了限制性的Role和RoleBindings则⽆法对PersistentVolume进⾏备份,此时你可以通过另外的⼀个RBAC策略来只进⾏集群资源的备份与恢复。以下为设置⾓⾊和⾓⾊绑定的⽰例:apiVersion: /v1kind: Rolemetadata: namespace: YOUR_NAMESPACE_HERE name: ROLE_NAME_HERE labels: component: velerorules: - apiGroups: - verbs: - "*" resources: - "*"apiVersion: /v1kind: RoleBindingmetadata: name: ROLEBINDING_NAME_HEREsubjects: - kind: ServiceAccount name: YOUR_SERVICEACCOUNT_HEREroleRef: kind: Role name: ROLE_NAME_HERE apiGroup: 4 相关的CRD资源信息以下是具有某些功能的API类型的列表,您可以通过velero命令⾏使⽤json / yaml进⾏相应的修改与配置4.1 Backup通过调⽤BackupAPI来请求Velero服务器执⾏备份,创建备份后,Velero服务器⽴即启动备份过程;备份属于的API组和版本为/v1。以下为定义⼀个Backup对象的⽰例,其中包括了所有可能的字段:# Standard Kubernetes API Version declaration. sion: /v1# Standard Kubernetes Kind declaration. : Backup# Standard Kubernetes metadata. ta: # Backup name. May be any valid Kubernetes object name. Required. name: a # Backup namespace. Must be the namespace of the Velero server. Required. namespace: velero# Parameters about the backup. : # Array of namespaces to include in the backup. If unspecified, all namespaces are included. # Optional. includedNamespaces: - '*' # Array of namespaces to exclude from the backup. Optional. excludedNamespaces: - some-namespace # Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods') # or fully-qualified. If unspecified, all resources are included. Optional. # or fully-qualified. If unspecified, all resources are included. 
Optional. includedResources: - '*' # Array of resources to exclude from the backup. Resources may be shortcuts (for example 'po' for 'pods') # or fully-qualified. Optional. excludedResources: - # Whether or not to include cluster-scoped resources. Valid values are true, false, and # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded # resources and the label selector). If false, no cluster-scoped resources are included. If unset, # all cluster-scoped resources are included if and only if all namespaces are included and there are # no excluded namespaces. Otherwise, if there is at least one namespace specified in either # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed # up are those associated with namespace-scoped resources included in the backup. For example, if a # PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is # cluster-scoped) would also be backed up. includeClusterResources: null # Individual objects must match this label selector to be included in the backup. Optional. labelSelector: matchLabels: app: velero component: server # Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and # AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as # a persistent volume provider is configured for Velero. snapshotVolumes: null # Where to store the tarball and logs. storageLocation: aws-primary # The list of locations in which to store volume snapshots created for this backup. volumeSnapshotLocations: - aws-primary - gcp-primary # The amount of time before this backup is eligible for garbage collection. If not specified, # a default value of 30 days will be used. The default can be configured on the velero server # by passing the flag --default-backup-ttl. ttl: 24h0m0s # Whether restic should be used to take a backup of all pod volumes by default. 
defaultVolumesToRestic: true # Actions to perform at different times during a backup. The only hook supported is # executing a command in a container in a pod using the pod exec API. Optional. hooks: # Array of hooks that are applicable to specific resources. Optional. resources: - # Name of the hook. Will be displayed in backup log. name: my-hook # Array of namespaces to which this hook applies. If unspecified, the hook applies to all # namespaces. Optional. includedNamespaces: - '*' # Array of namespaces to which this hook does not apply. Optional. excludedNamespaces: - some-namespace # Array of resources to which this hook applies. The only resource supported at this time is # pods. includedResources: - pods # Array of resources to which this hook does not apply. Optional. excludedResources: [] # This hook only applies to objects matching this label selector. Optional. labelSelector: matchLabels: app: velero component: server # An array of hooks to run before executing custom actions. Only "exec" hooks are supported. pre: pre: - # The type of hook. This must be "exec". exec: # The name of the container where the command will be executed. If unspecified, the # first container in the pod will be used. Optional. container: my-container # The command to execute, specified as an array. Required. command: - /bin/uname - -a # How to handle an error executing the command. Valid values are Fail and Continue. # Defaults to Fail. Optional. onError: Fail # How long to wait for the command to finish executing. Defaults to 30 seconds. Optional. timeout: 10s # An array of hooks to run after all custom actions and additional items have been # processed. Only "exec" hooks are supported. post: # Same content as pre above.# Status about the Backup. Users should not set any data : # The version of this Backup. The only version supported is 1. version: 1 # The date and time when the Backup is eligible for garbage collection. expiration: null # The current phase. 
  # Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
  phase: ""
  # An array of any validation errors encountered.
  validationErrors: null
  # Date/time when the backup started being processed.
  startTimestamp: 2019-04-29T15:58:43Z
  # Date/time when the backup finished being processed.
  completionTimestamp: 2019-04-29T15:58:56Z
  # Number of volume snapshots that Velero tried to create for this backup.
  volumeSnapshotsAttempted: 2
  # Number of volume snapshots that Velero successfully created for this backup.
  volumeSnapshotsCompleted: 1
  # Number of warnings that were logged by the backup.
  warnings: 2
  # Number of errors that were logged by the backup.
  errors: 0

4.2 Restore

The Restore API creates restore objects, which restore the corresponding backed-up data from a backup file. Once created, the Velero server starts the restore process immediately. Restores belong to the API group and version velero.io/v1.

The following example defines a restore object and includes every possible field:

# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Restore
# Standard Kubernetes metadata. Required.
metadata:
  # Restore name. May be any valid Kubernetes object name. Required.
  name: a-very-special-backup-3333
  # Restore namespace. Must be the namespace of the Velero server. Required.
  namespace: velero
# Parameters about the restore. Required.
spec:
  # BackupName is the unique name of the Velero backup to restore from.
  backupName: a-very-special-backup
  # Array of namespaces to include in the restore. If unspecified, all namespaces are included.
  # Optional.
  includedNamespaces:
  - '*'
  # Array of namespaces to exclude from the restore. Optional.
  excludedNamespaces:
  - some-namespace
  # Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')
  # or fully-qualified. If unspecified, all resources are included. Optional.
  includedResources:
  - '*'
  # Array of resources to exclude from the restore. Resources may be shortcuts (for example 'po' for 'pods')
  # or fully-qualified. Optional.
  excludedResources:
  -
  # Whether or not to include cluster-scoped resources. Valid values are true, false, and
  # null/unset.
  # If true, all cluster-scoped resources are included (subject to included/excluded
  # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
  # all cluster-scoped resources are included if and only if all namespaces are included and there are
  # no excluded namespaces. Otherwise, if there is at least one namespace specified in either
  # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are
  # restored are those associated with namespace-scoped resources included in the restore. For example, if a
  # PersistentVolumeClaim is included in the restore, its associated PersistentVolume (which is
  # cluster-scoped) would also be restored.
  includeClusterResources: null
  # Individual objects must match this label selector to be included in the restore. Optional.
  labelSelector:
    matchLabels:
      app: velero
      component: server
  # NamespaceMapping is a map of source namespace names to
  # target namespace names to restore into. Any source namespaces not
  # included in the map will be restored into namespaces of the same name.
  namespaceMapping:
    namespace-backup-from: namespace-to-restore-to
  # RestorePVs specifies whether to restore all included PVs
  # from snapshot (via the cloudprovider).
  restorePVs: true
  # ScheduleName is the unique name of the Velero schedule
  # to restore from. If specified, and BackupName is empty, Velero will
  # restore from the most recent successful backup created from this schedule.
  scheduleName: my-scheduled-backup-name
  # Actions to perform during or post restore. The only hooks currently supported are
  # adding an init container to a pod before it can be restored and executing a command in a
  # restored pod's container. Optional.
  hooks:
    # Array of hooks that are applicable to specific resources. Optional.
    resources:
    # Name is the name of this hook.
    - name: restore-hook-1
      # Array of namespaces to which this hook applies. If unspecified, the hook applies to all
      # namespaces. Optional.
      includedNamespaces:
      - ns1
      # Array of namespaces to which this hook does not apply. Optional.
      excludedNamespaces:
      - ns3
      # Array of resources to which this hook applies. The only resource supported at this time is
      # pods.
      includedResources:
      - pods
      # Array of resources to which this hook does not apply. Optional.
      excludedResources: []
      # This hook only applies to objects matching this label selector. Optional.
      labelSelector:
        matchLabels:
          app: velero
          component: server
      # An array of hooks to run during or after restores. Currently only "init" and "exec" hooks
      # are supported.
      postHooks:
      # The type of the hook. This must be "init" or "exec".
      - init:
          # An array of container specs to be added as init containers to pods to which this hook applies.
          initContainers:
          - name: restore-hook-init1
            image: alpine:latest
            # Mounting volumes from the podSpec to which this hook applies.
            volumeMounts:
            - mountPath: /restores/pvc1-vm
              # Volume name from the podSpec
              name: pvc1-vm
            command:
            - /bin/ash
            - -c
            - echo -n "FOOBARBAZ" >> /restores/pvc1-vm/foobarbaz
          - name: restore-hook-init2
            image: alpine:latest
            # Mounting volumes from the podSpec to which this hook applies.
            volumeMounts:
            - mountPath: /restores/pvc2-vm
              # Volume name from the podSpec
              name: pvc2-vm
            command:
            - /bin/ash
            - -c
            - echo -n "DEADFEED" >> /restores/pvc2-vm/deadfeed
      - exec:
          # The container name where the hook will be executed. Defaults to the first container.
          # Optional.
          container: foo
          # The command that will be executed in the container. Required.
          command:
          - /bin/bash
          - -c
          - "psql < /backup/"
          # How long to wait for a container to become ready. This should be long enough for the
          # container to start plus any preceding hooks in the same container to complete. The wait
          # timeout begins when the container is restored and may require time for the image to pull
          # and volumes to mount. If not set the restore will wait indefinitely. Optional.
          waitTimeout: 5m
          # How long to wait once execution begins.
          # Defaults to 30 seconds. Optional.
          execTimeout: 1m
          # How to handle execution failures. Valid values are `Fail` and `Continue`. Defaults to
          # `Continue`. With `Continue` mode, execution failures are logged only. With `Fail` mode,
          # no more restore hooks will be executed in any container in any pod and the status of the
          # Restore will be `PartiallyFailed`. Optional.
          onError: Continue

# RestoreStatus captures the current status of a Velero restore. Users should not set any data here.
status:
  # The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
  phase: ""
  # An array of any validation errors encountered.
  validationErrors: null
  # Number of warnings that were logged by the restore.
  warnings: 2
  # Errors is a count of all error messages that were generated
  # during execution of the restore. The actual errors are stored in object
  # storage.
  errors: 0
  # FailureReason is an error that caused the entire restore
  # to fail.
  failureReason:

4.3 Schedule

The Schedule API creates repeating backup jobs from a given cron expression. Once created, the Velero server starts the backup process; it then waits for the next valid point in the given cron expression and runs the backup process again. Schedules belong to the API group and version velero.io/v1.

The following example defines a Schedule backup job object and includes every possible field:

# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Schedule
# Standard Kubernetes metadata. Required.
metadata:
  # Schedule name. May be any valid Kubernetes object name. Required.
  name: a
  # Schedule namespace. Must be the namespace of the Velero server. Required.
  namespace: velero
# Parameters about the scheduled backup. Required.
spec:
  # Schedule is a Cron expression defining when to run the Backup.
  schedule: 0 7 * * *
  # Template is the spec that should be used for each backup triggered by this schedule.
  template:
    # Array of namespaces to include in the scheduled backup. If unspecified, all namespaces are included.
    # Optional.
    includedNamespaces:
    - '*'
    # Array of namespaces to exclude from the scheduled backup. Optional.
    excludedNamespaces:
    - some-namespace
    # Array of resources to include in the scheduled backup.
    # Resources may be shortcuts (for example 'po' for 'pods')
    # or fully-qualified. If unspecified, all resources are included. Optional.
    includedResources:
    - '*'
    # Array of resources to exclude from the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
    # or fully-qualified. Optional.
    excludedResources:
    -
    # Whether or not to include cluster-scoped resources. Valid values are true, false, and
    # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
    # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
    # all cluster-scoped resources are included if and only if all namespaces are included and there are
    # no excluded namespaces. Otherwise, if there is at least one namespace specified in either
    # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
    # up are those associated with namespace-scoped resources included in the scheduled backup. For example, if a
    # PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
    # cluster-scoped) would also be backed up.
    includeClusterResources: null
    # Individual objects must match this label selector to be included in the scheduled backup. Optional.
    labelSelector:
      matchLabels:
        app: velero
        component: server
    # Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
    # AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
    # a persistent volume provider is configured for Velero.
    snapshotVolumes: null
    # Where to store the tarball and logs.
    storageLocation: aws-primary
    # The list of locations in which to store volume snapshots created for backups under this schedule.
    volumeSnapshotLocations:
    - aws-primary
    - gcp-primary
    # The amount of time before backups created on this schedule are eligible for garbage collection.
    # If not specified,
    # a default value of 30 days will be used. The default can be configured on the velero server
    # by passing the flag --default-backup-ttl.
    ttl: 24h0m0s
    # Actions to perform at different times during a backup. The only hook supported is
    # executing a command in a container in a pod using the pod exec API. Optional.
    hooks:
      # Array of hooks that are applicable to specific resources. Optional.
      resources:
      -
        # Name of the hook. Will be displayed in backup log.
        name: my-hook
        # Array of namespaces to which this hook applies. If unspecified, the hook applies to all
        # namespaces. Optional.
        includedNamespaces:
        - '*'
        # Array of namespaces to which this hook does not apply. Optional.
        excludedNamespaces:
        - some-namespace
        # Array of resources to which this hook applies. The only resource supported at this time is
        # pods.
        includedResources:
        - pods
        # Array of resources to which this hook does not apply. Optional.
        excludedResources: []
        # This hook only applies to objects matching this label selector. Optional.
        labelSelector:
          matchLabels:
            app: velero
            component: server
        # An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
        pre:
        -
          # The type of hook. This must be "exec".
          exec:
            # The name of the container where the command will be executed. If unspecified, the
            # first container in the pod will be used. Optional.
            container: my-container
            # The command to execute, specified as an array. Required.
            command:
            - /bin/uname
            - -a
            # How to handle an error executing the command. Valid values are Fail and Continue.
            # Defaults to Fail. Optional.
            onError: Fail
            # How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
            timeout: 10s
        # An array of hooks to run after all custom actions and additional items have been
        # processed. Only "exec" hooks are supported.
        post:
          # Same content as pre above.

# Status about the Schedule. Users should not set any data here.
status:
  # The current phase of the latest scheduled backup.
  # Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
  phase: ""
  # Date/time of the last backup for a given schedule.
  lastBackup:
  # An array of any validation errors encountered.
  validationErrors:

4.4 BackupStorageLocation

Velero can store backups in more than one location, each declared through a CRD resource in the cluster called BackupStorageLocation. Velero must have at least one BackupStorageLocation. By default, an instance named default is created in the velero namespace to declare the backup storage location; the server's default backup storage location can be changed with the --default-backup-storage-location flag.

The following is a simple example of creating a BackupStorageLocation:

apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  backupSyncPeriod: 2m0s
  provider: aws
  objectStorage:
    bucket: myBucket
  config:
    region: us-west-2
    profile: "default"

The configurable parameters are listed below:

| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| provider | String | Required | The name of the object storage provider plugin. See the provider plugin documentation for the appropriate value for a given provider. |
| objectStorage | ObjectStorageLocation | Required | Specification of the object storage for the given provider. |
| objectStorage/bucket | String | Required | The storage bucket where backup files are uploaded. |
| objectStorage/prefix | String | Optional | The directory inside the storage bucket where backup files are uploaded. |
| objectStorage/caCert | String | Optional | A base64-encoded CA certificate used when verifying the TLS connection. |
| config | map[string]string | None (Optional) | Provider-specific key/value pairs required by the object storage provider. See the provider documentation for details. |
| accessMode | String | ReadWrite | The mode in which Velero accesses the backup storage location. Valid values are ReadWrite and ReadOnly. |
| backupSyncPeriod | metav1.Duration | Optional | How frequently Velero should synchronize backups in object storage. Defaults to Velero's server backup sync period; set to 0s to disable sync. |
| validationFrequency | metav1.Duration | Optional | How frequently Velero should validate the object storage. Defaults to Velero's server validation frequency; set to 0s to disable validation. Default 1 minute. |

4.5 VolumeSnapshotLocation

A VolumeSnapshotLocation defines where the volume snapshots created for a backup are stored. Velero can be configured to snapshot volumes from multiple providers, and allows you to configure a VolumeSnapshotLocation for each provider, but at backup time only one location can be selected per provider. Each VolumeSnapshotLocation is described by a CRD resource in the cluster specifying the provider and the storage location. There must be at least one per cloud provider.

The following is an example of creating a VolumeSnapshotLocation:

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: aws-default
  namespace: velero
spec:
  provider: aws
  config:
    region: us-west-2
    profile: "default"

The configurable parameters are as follows:

| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| provider | String | Required | The name of the storage provider that will be used to create the volume snapshots. See the provider plugin documentation for the appropriate value. |
| config | map[string]string | None (Optional) | Provider-specific configuration passed when creating volume snapshots. See the provider documentation for details. |

Note: this document is translated from the official documentation. If anything is still unclear, leave a comment or consult the original text on the official site.
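As a worked illustration of the two CRDs above, the following sketch declares a secondary backup storage location in a different region, set to ReadOnly mode (useful during restore scenarios to prevent backups being created or deleted unintentionally, as discussed in section 1.2.3), together with a matching volume snapshot location. The names, bucket, prefix, and regions here are placeholder values for illustration, not anything Velero requires:

```yaml
# Hypothetical secondary backup storage location; bucket, prefix, and
# region are placeholders.
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: aws-secondary
  namespace: velero
spec:
  provider: aws
  # ReadOnly disables backup creation and deletion in this location,
  # which is useful while performing restores.
  accessMode: ReadOnly
  objectStorage:
    bucket: myBucket-dr
    prefix: cluster1
  config:
    region: us-east-1
---
# Matching volume snapshot location in the same region, so that volume
# snapshots for backups sent here stay subject to the provider's
# same-region snapshot restriction noted in section 1.3.1.
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: aws-secondary
  namespace: velero
spec:
  provider: aws
  config:
    region: us-east-1
```

Remember the caveat from section 1.3.1: a single backup can use only one BackupStorageLocation, so a secondary location like this provides redundancy only via a separate schedule that targets it.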

