
Deploying OpenShift API for Data Protection on an ARO cluster

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

Prerequisites

  * An ARO cluster, with the oc CLI logged in as a user with cluster-admin privileges
  * The az CLI, logged in to the Azure subscription that hosts the cluster
  * jq (used below to inspect backup and restore status)
  * Optionally, the velero CLI (used in the cleanup section)

Getting Started

  1. Create the following environment variables, substituting appropriate values for your environment:
export AZR_CLUSTER_NAME=oadp
export AZR_SUBSCRIPTION_ID=$(az account show --query 'id' -o tsv)
export AZR_TENANT_ID=$(az account show --query 'tenantId' -o tsv)
export AZR_RESOURCE_GROUP=oadp
export AZR_STORAGE_ACCOUNT_ID=oadp
export AZR_STORAGE_CONTAINER=oadp
export AZR_STORAGE_ACCOUNT_SP_NAME=oadp
export AZR_IAM_ROLE=oadp
# NOTE: run this export only after the storage account is created in the next
# section; it returns an empty value until the account exists.
export AZR_STORAGE_ACCOUNT_ACCESS=$(az storage account keys list --account-name $AZR_STORAGE_ACCOUNT_ID --query "[?keyName == 'key1'].value" -o tsv)
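Since every later command depends on these variables, a quick guard can save debugging time. The following is a minimal sketch (bash-specific); check_env is a hypothetical helper, not part of any CLI:

```shell
# Sketch: print the name of each unset or empty variable and return non-zero
# if any are missing. Uses bash indirect expansion (${!v}).
check_env() {
  local missing=0 v
  for v in "$@"; do
    if [ -z "${!v}" ]; then
      echo "MISSING: $v"
      missing=1
    fi
  done
  return $missing
}
```

For example, `check_env AZR_CLUSTER_NAME AZR_SUBSCRIPTION_ID AZR_TENANT_ID AZR_RESOURCE_GROUP AZR_STORAGE_ACCOUNT_ID AZR_STORAGE_CONTAINER AZR_STORAGE_ACCOUNT_SP_NAME AZR_IAM_ROLE` should print nothing.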

Prepare Azure Account

  1. Create an Azure Storage Account as a backup target:
az storage account create \
  --name $AZR_STORAGE_ACCOUNT_ID \
  --resource-group $AZR_RESOURCE_GROUP \
  --sku Standard_GRS \
  --encryption-services blob \
  --https-only true \
  --kind BlobStorage \
  --access-tier Cool
  2. Create an Azure Blob storage container:
az storage container create \
  --name $AZR_STORAGE_CONTAINER \
  --public-access off \
  --account-name $AZR_STORAGE_ACCOUNT_ID
  3. Create a role definition granting the operator the minimal permissions it needs on the storage account where the backups are stored:
az role definition create --role-definition '{
   "Name": "'$AZR_IAM_ROLE'",
   "Description": "OADP related permissions to perform backups, restores and deletions",
   "Actions": [
       "Microsoft.Compute/disks/read",
       "Microsoft.Compute/disks/write",
       "Microsoft.Compute/disks/endGetAccess/action",
       "Microsoft.Compute/disks/beginGetAccess/action",
       "Microsoft.Compute/snapshots/read",
       "Microsoft.Compute/snapshots/write",
       "Microsoft.Compute/snapshots/delete",
       "Microsoft.Storage/storageAccounts/listkeys/action",
       "Microsoft.Storage/storageAccounts/regeneratekey/action"
   ],
   "AssignableScopes": ["/subscriptions/'$AZR_SUBSCRIPTION_ID'"]
   }'
  4. Create a service principal for interacting with the Azure API, taking note of the appId and password in the output. These values are stored as AZR_CLIENT_ID and AZR_CLIENT_SECRET below and used in subsequent commands:
az ad sp create-for-rbac --name $AZR_STORAGE_ACCOUNT_SP_NAME

IMPORTANT be sure to store the client ID and client secret for your service principal, as they are needed later in this walkthrough. The command produces output like the following:

{
  "appId": "xxxxx",
  "displayName": "oadp",
  "password": "xxxx",
  "tenant": "xxxx"
}

Set the following variables:

export AZR_CLIENT_ID=<VALUE_FROM_appId_ABOVE>
export AZR_CLIENT_SECRET=<VALUE_FROM_password_ABOVE>
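If you prefer not to copy the values by hand, the JSON output can be parsed directly. This is a sketch assuming python3 is available; json_field is a hypothetical helper, not an az subcommand:

```shell
# Sketch: read one field out of a JSON document on stdin, such as the JSON
# printed by `az ad sp create-for-rbac`.
json_field() {
  python3 -c 'import sys, json; print(json.load(sys.stdin)[sys.argv[1]])' "$1"
}

# Example wiring (the appId and password keys match the output shown above):
#   SP_JSON=$(az ad sp create-for-rbac --name $AZR_STORAGE_ACCOUNT_SP_NAME)
#   export AZR_CLIENT_ID=$(echo "$SP_JSON" | json_field appId)
#   export AZR_CLIENT_SECRET=$(echo "$SP_JSON" | json_field password)
```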
  5. Retrieve the object ID of the service principal you just created. It is used to assign the previously created role to the service principal:
export AZR_SP_ID=$(az ad sp list --display-name $AZR_STORAGE_ACCOUNT_SP_NAME --query "[?appDisplayName == '$AZR_STORAGE_ACCOUNT_SP_NAME'].id" -o tsv)
  6. Assign the previously created role to the service principal, scoped to the storage account:
az role assignment create \
    --role $AZR_IAM_ROLE \
    --assignee-object-id $AZR_SP_ID \
    --scope "/subscriptions/$AZR_SUBSCRIPTION_ID/resourceGroups/$AZR_RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$AZR_STORAGE_ACCOUNT_ID"
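To confirm the assignment took effect, you can list the roles granted to the service principal at the storage account scope. A sketch assuming the az CLI; list_sp_roles is a hypothetical wrapper:

```shell
# Sketch (assumes the az CLI): print the names of the roles assigned to a
# service principal at a given scope, so the custom role can be confirmed.
list_sp_roles() {
  local sp_object_id=$1 scope=$2
  az role assignment list \
    --assignee "$sp_object_id" \
    --scope "$scope" \
    --query '[].roleDefinitionName' \
    -o tsv
}
```

For example, `list_sp_roles $AZR_SP_ID "/subscriptions/$AZR_SUBSCRIPTION_ID/resourceGroups/$AZR_RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$AZR_STORAGE_ACCOUNT_ID"` should include the role name you created (oadp in this walkthrough).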

Deploy OADP on ARO Cluster

  1. Create a namespace for OADP:
oc create namespace openshift-adp
  2. Deploy the OADP Operator:
cat << EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: oadp
  namespace: openshift-adp
spec:
  targetNamespaces:
  - openshift-adp
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: redhat-oadp-operator
  namespace: openshift-adp
spec:
  channel: stable-1.2
  installPlanApproval: Automatic
  name: redhat-oadp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
  3. Wait for the operator to be ready:
watch oc -n openshift-adp get pods
You should see:
NAME                                                READY   STATUS    RESTARTS   AGE
openshift-adp-controller-manager-546684844f-qqjhn   1/1     Running   0          22s
  4. Create a file containing the environment variables the operator needs. These values are stored in the cloud key of the secret created in the next step and are used by the operator to locate its configuration:
cat << EOF > /tmp/credentials-velero
AZURE_SUBSCRIPTION_ID=${AZR_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZR_TENANT_ID}
AZURE_RESOURCE_GROUP=${AZR_RESOURCE_GROUP}
AZURE_CLIENT_ID=${AZR_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZR_CLIENT_SECRET}
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=${AZR_STORAGE_ACCOUNT_ACCESS}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF
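Before turning this file into a secret, it is worth confirming that every key the Velero Azure plugin expects is present. A sketch; check_credentials_file is a hypothetical helper:

```shell
# Sketch: print the name of each expected key that is missing from the
# credentials file, and return non-zero if any are absent.
check_credentials_file() {
  local f=$1 key rc=0
  for key in AZURE_SUBSCRIPTION_ID AZURE_TENANT_ID AZURE_RESOURCE_GROUP \
             AZURE_CLIENT_ID AZURE_CLIENT_SECRET \
             AZURE_STORAGE_ACCOUNT_ACCESS_KEY AZURE_CLOUD_NAME; do
    grep -q "^${key}=" "$f" || { echo "MISSING: $key"; rc=1; }
  done
  return $rc
}
```

For example, `check_credentials_file /tmp/credentials-velero` should print nothing. An empty AZURE_CLIENT_SECRET or AZURE_STORAGE_ACCOUNT_ACCESS_KEY is a common symptom of running the exports out of order.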
  5. Create the secret the operator will use to access the storage account, from the credentials file you created in the previous step:
oc create secret generic cloud-credentials-azure \
  --namespace openshift-adp \
  --from-file cloud=/tmp/credentials-velero

WARNING once you are comfortable that the operator is configured and working, be sure to delete the file at /tmp/credentials-velero to avoid exposing sensitive credentials to anyone who may share the system you are running these commands from.

  6. Deploy a Data Protection Application:
cat << EOF | oc create -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: $AZR_CLUSTER_NAME
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - azure
        - openshift 
      resourceTimeout: 10m 
    restic:
      enable: true 
  backupLocations:
    - velero:
        config:
          resourceGroup: $AZR_RESOURCE_GROUP
          storageAccount: $AZR_STORAGE_ACCOUNT_ID 
          subscriptionId: $AZR_SUBSCRIPTION_ID 
        credential:
          key: cloud
          name: cloud-credentials-azure
        provider: azure
        default: true
        objectStorage:
          bucket: $AZR_STORAGE_CONTAINER
          prefix: oadp
  snapshotLocations: 
    - velero:
        config:
          resourceGroup: $AZR_RESOURCE_GROUP
          subscriptionId: $AZR_SUBSCRIPTION_ID 
          incremental: "true"
        name: default
        provider: azure
EOF
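The DPA creates a BackupStorageLocation named after itself with a -1 suffix (the same name used as the storageLocation in the backup below). Before backing anything up, confirm it reports Available. A sketch assuming the oc CLI; bsl_phase is a hypothetical helper:

```shell
# Sketch (assumes the oc CLI is logged in): print the phase of a
# BackupStorageLocation in the openshift-adp namespace.
bsl_phase() {
  oc -n openshift-adp get backupstoragelocation "$1" \
    -o jsonpath='{.status.phase}'
}
```

For example, `bsl_phase ${AZR_CLUSTER_NAME}-1` should print Available; any other phase usually points at a problem with the cloud-credentials-azure secret or the storage container settings.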

Perform a Backup

  1. Create a workload to backup:
oc create namespace hello-world
oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
  2. Expose the route:
oc expose service/hello-openshift -n hello-world
  3. Make a request to see if the application is working:
curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`

If the application is working, you should see a response such as:

Hello OpenShift!
  4. Back up the workload:
cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  includedNamespaces:
    - hello-world
  storageLocation: ${AZR_CLUSTER_NAME}-1
  ttl: 720h0m0s
EOF
  5. Wait until the backup is done:
watch "oc -n openshift-adp get backup hello-world -o json | jq .status"

NOTE the backup is done when the phase is Completed, as below:

{
  "completionTimestamp": "2022-09-07T22:20:44Z",
  "expiration": "2022-10-07T22:20:22Z",
  "formatVersion": "1.1.0",
  "phase": "Completed",
  "progress": {
    "itemsBackedUp": 58,
    "totalItems": 58
  },
  "startTimestamp": "2022-09-07T22:20:22Z",
  "version": 1
}
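For scripted runs, the watch can be replaced by a polling loop that exits once the backup reaches a terminal phase. A sketch assuming the oc CLI is logged in; wait_for_backup is a hypothetical helper, not part of OADP:

```shell
# Sketch: poll a Velero Backup in the openshift-adp namespace until it reaches
# a terminal phase, printing that phase. Returns 0 on Completed, 1 otherwise.
wait_for_backup() {
  local name=$1 timeout=${2:-600} elapsed=0 phase
  while [ "$elapsed" -lt "$timeout" ]; do
    phase=$(oc -n openshift-adp get backup "$name" -o jsonpath='{.status.phase}' 2>/dev/null)
    case "$phase" in
      Completed) echo "$phase"; return 0 ;;
      Failed|PartiallyFailed|FailedValidation) echo "$phase"; return 1 ;;
    esac
    sleep 10
    elapsed=$((elapsed + 10))
  done
  echo "Timeout"
  return 1
}
```

For example, `wait_for_backup hello-world` blocks until the backup above finishes or ten minutes elapse.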
  6. Delete the demo workload:
oc delete ns hello-world
  7. Restore from the backup:
cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  backupName: hello-world
EOF
  8. Wait for the restore to finish:
watch "oc -n openshift-adp get restore hello-world -o json | jq .status"

NOTE the restore is done when the phase is Completed, as below:

{
  "completionTimestamp": "2022-09-07T22:25:47Z",
  "phase": "Completed",
  "progress": {
    "itemsRestored": 38,
    "totalItems": 38
  },
  "startTimestamp": "2022-09-07T22:25:28Z",
  "warnings": 9
}
  9. Ensure that the workload is restored:
oc -n hello-world get pods

You should see:

NAME                              READY   STATUS    RESTARTS   AGE
hello-openshift-9f885f7c6-kdjpj   1/1     Running   0          90s
Then verify that the restored application responds:
curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`

If the application is working, you should see a response such as:

Hello OpenShift!

Cleanup

IMPORTANT perform these steps only if you do not need to keep any of your work.

Cleanup Cluster Resources

  1. Delete the workload:
oc delete ns hello-world
  2. Delete the Data Protection Application:
oc -n openshift-adp delete dpa ${AZR_CLUSTER_NAME}
  3. Remove the operator if it is no longer required:
oc -n openshift-adp delete subscription redhat-oadp-operator
  4. Remove the namespace for the operator:
oc delete ns openshift-adp
  5. Remove the backup and restore resources from the cluster if they are no longer required:
oc -n openshift-adp delete backup hello-world
oc -n openshift-adp delete restore hello-world

To delete the backup/restore and remote objects in Azure Blob storage:

velero backup delete hello-world
velero restore delete hello-world
  6. Remove the Custom Resource Definitions from the cluster if you no longer wish to keep them:
for CRD in `oc get crds | grep velero | awk '{print $1}'`; do oc delete crd $CRD; done
for CRD in `oc get crds | grep -i oadp | awk '{print $1}'`; do oc delete crd $CRD; done

Cleanup Azure Resources

  1. Delete the Azure Storage Account:
az storage account delete \
  --name $AZR_STORAGE_ACCOUNT_ID \
  --resource-group $AZR_RESOURCE_GROUP \
  --yes
  2. Delete the IAM Role:
az role definition delete --name $AZR_IAM_ROLE
  3. Delete the Service Principal:
az ad sp delete --id $AZR_SP_ID
