VCP-CMA 2020 (2V0-31.20)


The sections and objectives in this guide have been extracted from the official VMware study guide:

As part of my standard study methodology, I’ve added to each section with summaries and links referencing official VMware material.

By no means is this a complete study guide. I recommend spending as much time as you can in vRealize Automation, either in a homelab or in VMware’s Hands-on Labs.

Section 1 – Architectures and Technologies

Objective 1.1 - Describe the Architecture of vRealize Automation

Objective 1.2 - Differentiate between vRealize Automation and vRealize Automation Cloud

  • Cloud
    • Product updates released every month.
  • On-Prem
    • Allows integration with on-prem vROPs and vIDM

Objective 1.3 – Describe the Services Offered by vRealize Automation Cloud Assembly

vRealize Automation Cloud Assembly

Code Stream

vRealize Automation Code Stream™ is continuous-integration and continuous-delivery (CI/CD) software that enables you to deliver software rapidly and reliably, with little overhead. vRealize Automation Code Stream supports deploying monolithic legacy applications, as well as Docker and Kubernetes containers running on multiple clouds.


Service Broker

The vRealize Automation Service Broker provides a single point where you can request and manage catalog items.

As a cloud administrator, you create catalog items by importing released vRealize Automation Cloud Assembly blueprints and Amazon Web Services CloudFormation templates that your users can deploy to your cloud vendor regions or datastores.

As a user, you can request and monitor the provisioning process. After deployment, you manage the deployed catalog items throughout the deployment lifecycle.



vRealize Automation includes a preconfigured embedded vRealize Orchestrator instance.

vRealize Orchestrator is a development- and process-automation platform that provides an extensive library of workflows and a workflow engine. Workflows achieve step-by-step process automation for greater flexibility in automated server provisioning and operational tasks across VMware and third-party applications. By using the workflow editor, the built-in Mozilla Rhino JavaScript scripting engine, and the vRealize Orchestrator and vCenter Server APIs, you can design custom workflows.


Section 2 – VMware Products and Solutions

  • There are no testable objectives for this section

Section 3 - Planning and Designing

  • There are no testable objectives for this section

Section 4 – Installing, Configuring, and Setup

Objective 4.1 - Describe the Different Types of vRealize Automation deployments

Simple Deployment:

  • 1 x vRealize Suite Lifecycle Manager (vRSLCM)
  • 1 x VMware Identity Manager (vIDM)
  • 1 x vRealize Automation appliance

No load balancer required. Not recommended for production use; suitable for pilots, POCs, and development at most.

HA Deployment:

  • Identity Manager appliance load-balanced VIP
  • vRealize Automation appliance load-balanced VIP
  • 1 x vRealize Suite Lifecycle Manager appliance
  • 3 x VMware Identity Manager appliances
  • 3 x vRealize Automation appliances

HA deployment is for Production. It can suffer one node outage and still provide services.

Objective 4.2 - Prepare the Pre-requisites for an Installation (DNS, NTP, Service Accounts etc.)

Each vRealize Automation node requires its own network configuration. The network requirements for vRealize Automation are:

  • A single, static IPv4 address and network configuration
  • A reachable DNS server, set manually
  • A valid, fully qualified domain name, set manually, that can be resolved both forward and in reverse through the DNS server

Note: IP address change or hostname change after installation is not supported and results in a broken setup that is not recoverable.

Objective 4.3 - Perform a Standard Deployment using vRealize Easy Installer

Objective 4.4 - Configure vRealize Automation using Quickstart

Objective 4.5 - Perform Manual Installation using Lifecycle Manager

Objective 4.6 - Configure Identity Sources

Objective 4.7 - Configure Identity and Access Management

Objective 4.8 - Set up Cloud Accounts

Cloud Accounts are simply 1:1 credential connections to resources (AWS, vSphere). You do not configure compute options at the cloud account level.

Objective 4.9 - Add Cloud Zones

A vRealize Automation Cloud Assembly cloud zone is a set of resources within a cloud account type such as AWS or vSphere.

Cloud zones in a specific account region are where your blueprints deploy workloads. Each cloud zone is associated with a vRealize Automation Cloud Assembly project.

Because cloud zones are specific to a region, you must assign them to a project. There is a many-to-many relationship between cloud zones and projects.

Objective 4.10 - Add Projects

Projects control who has access to vRealize Automation Cloud Assembly blueprints and where the blueprints are deployed. You use projects to organize and govern what your users can do and to what cloud zones they can deploy blueprints in your cloud infrastructure.

You create a project to which you add members and cloud zones so that the project members can deploy their blueprints to the associated zones. As the vRealize Automation Cloud Assembly administrator, you create a project for a development team. You can then assign a project administrator or you can operate as the project administrator.

When you create a blueprint, you first select the project to associate it with. The project must exist before you can create the blueprint.

Objective 4.11 - Add Image Mappings

A vRealize Automation Cloud Assembly image map is where you use natural language to define target deployment operating systems for a specific cloud account/region.

An image mapping groups a set of predefined target OS specifications for a specific cloud account/region in vRealize Automation Cloud Assembly by using natural language naming.

Cloud vendor accounts such as Microsoft Azure and Amazon Web Services use images to group a set of target deployment conditions together, including OS and related configuration settings. vCenter and NSX-based environments, including VMware Cloud on AWS, use a similar grouping mechanism to define a set of OS deployment conditions. When you build and eventually deploy and iterate a blueprint, you pick an available image that best fits your needs.

Objective 4.12 - Add Flavor Mappings

A vRealize Automation Cloud Assembly flavor map is where you use natural language to define target deployment sizes for a specific cloud account/region.

Flavor maps express the deployment sizes that make sense for your environment. One example might be small for 1 CPU and 2 GB memory and large for 2 CPUs and 8 GB memory for a vCenter account in a named data center and t2.nano for an Amazon Web Services account in a named region.
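To make the idea concrete, here is a minimal, hypothetical blueprint fragment (resource and image names are my own) showing how a blueprint consumes a flavor mapping by name rather than by vendor-specific sizing:

```yaml
# Hypothetical blueprint fragment. 'small' is resolved per cloud zone by the
# flavor mapping, e.g. 1 CPU / 2 GB on a vCenter account or t2.nano on AWS.
resources:
  WebVM:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
```

The same blueprint can then deploy to any cloud account/region that has a `small` flavor defined, which is the point of the natural-language mapping.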

Objective 4.13 - Add Network Profiles

A vRealize Automation Cloud Assembly network profile describes the behavior of the network to be deployed.

For example, a network might need to be Internet facing versus internal only. Networks and their profiles are cloud-specific.

A network profile defines a group of networks and network settings that are available for a cloud account in a particular region or data center in vRealize Automation.

You typically define network profiles to support a target deployment environment, for example a small test environment where an existing network has outbound access only or a large load-balanced production environment that needs a set of security policies. Think of a network profile as a collection of workload-specific network characteristics.

Networks, also referred to as subnets, are logical subdivisions of an IP network. A network groups a cloud account, IP address or range, and network tags to control how and where to provision a blueprint deployment. Network parameters in the profile define how machines in the deployment can communicate with one another over IP layer 3. Networks can have tags.
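As an illustrative sketch (names and tags are hypothetical), a blueprint attaches a machine to a network resource, and a constraint tag on that network resource is matched against capability tags on the networks in a network profile at deployment time:

```yaml
# Hypothetical blueprint fragment. The 'net:web' constraint is matched
# against capability tags on networks grouped by a network profile.
resources:
  AppNet:
    type: Cloud.Network
    properties:
      networkType: existing
      constraints:
        - tag: 'net:web'
  WebVM:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      networks:
        - network: '${resource.AppNet.id}'
```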

Objective 4.14 - Add Storage Profiles

A cloud administrator can work with storage resources and their capabilities, which are discovered through vRealize Automation Cloud Assembly data collection from associated cloud accounts.

Storage resource capabilities are exposed through tags that typically originate at the source cloud account. However, a cloud administrator can also apply additional tags directly to storage resources by using vRealize Automation Cloud Assembly. The additional tags might label a specific capability for matching purposes at provisioning time.

A cloud account region contains storage profiles that let the cloud administrator define storage for the region.

Storage profiles include disk customizations, and a means to identify the type of storage by capability tags. Tags are then matched against provisioning service request constraints to create the desired storage at deployment time. Storage profiles are organized under cloud-specific regions. One cloud account might have multiple regions, with multiple storage profiles under each.

Vendor-independent placement is possible. For example, visualize three different vendor accounts and a region in each. Each region includes a storage profile that is capability tagged as fast. At provisioning time, a request containing a hard fast constraint tag looks for a matching fast capability, regardless of which vendor cloud is supplying the resources. A match then applies the settings from the associated storage profile during creation of the deployed storage item.

Capability tags that you add to storage profiles should not identify actual resource targets. Instead, they describe types of storage.
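The "fast" example above might look like the following hypothetical disk fragment in a blueprint, where the hard constraint matches any storage profile capability-tagged `fast`, regardless of vendor:

```yaml
# Hypothetical blueprint fragment. The hard 'fast' constraint must match a
# storage profile capability tag, or provisioning fails rather than falling
# back to other storage.
resources:
  DataDisk:
    type: Cloud.Volume
    properties:
      capacityGb: 50
      constraints:
        - tag: 'fast:hard'
```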

Objective 4.15 - Describe the Different Out of the Box Integrations Available with vRealize Automation

  • GitHub
  • GitLab
  • External IPAM
  • MyVMware (marketplace and images/blueprints)
  • External vRO appliances
  • Kubernetes
    • PKS
    • Roll-your-own k8s
  • Puppet Enterprise
  • Ansible Opensource
  • Ansible Tower
  • Active Directory
  • On-prem vROPs

Objective 4.16 - Integrate vRealize Automation with vRealize Operations

vRealize Automation can work with vRealize Operations Manager to perform advanced workload placement, provide deployment health and virtual machine metrics, and display pricing.

Integration between the two products must be on-premises to on-premises, not a mix of on-premises and cloud.

A URL, username, and password are required. vRA and vROps must be connected to the same endpoint (vCenter Server, for example).

Objective 4.17 - Describe the Onboarding Process

You use a workload onboarding plan to identify machines that have been data-collected from a cloud account type in a target region or data center but that are not yet managed by a vRealize Automation Cloud Assembly project.

When you add a cloud account that contains machines that were deployed outside of vRealize Automation Cloud Assembly, the machines are not managed by Cloud Assembly until you onboard them. Use an onboarding plan to bring unmanaged machines into vRealize Automation Cloud Assembly management. You create a plan, populate it with machines, and then run the plan to import the machines. Using the onboarding plan, you can create a blueprint and can also create one or many deployments.

You can onboard one or many unmanaged machines in a single plan. You can select machines manually or by using a filtering rule. Filtering rules select machines for onboarding based on criteria such as machine name, status, IP address, and tags.

  • You can onboard up to 3,500 unmanaged machines within a single onboarding plan per hour.
  • You can onboard up to 17,000 unmanaged machines concurrently within multiple onboarding plans per hour.

Machines that are available for workload onboarding are listed on the Resources > Machines page relative to a specific cloud account type and region and labeled as Discovered in the Origin column. Only machines that have been data-collected are listed. After you onboard the machines, they appear in the Origin column as Deployed.

The person who runs the workload onboarding plan is automatically assigned as the machine owner.

When your onboarding plan uses a vSphere machine, you must edit the blueprint after the onboarding process is complete. The onboarding process cannot link the source vSphere machine and its machine template, and the resultant blueprint will contain the imageRef: “no image available” entry in the blueprint code. The blueprint cannot be deployed until you specify the correct template name in the imageRef: field. To make it easier to locate and update the blueprint after the onboarding process is complete, use the Blueprint name option on the deployment’s Blueprint Configuration page. Record the auto-generated blueprint name or enter and record a blueprint name of your choice. When onboarding is complete, locate and open the blueprint and replace the “no image available” entry in the imageRef: field with the correct template name.

Process:

  • Select machines
  • Decide if selected machines go in one deployment or one deployment per machine
  • Select the blueprint to use for the imported machine(s)
  • Run the onboarding plan

You can also use filtering rules to automatically list machines for onboarding.

Objective 4.18 - Describe Action-Based Extensibility (ABX)

  • Using FaaS, you can create individual actions/functions that execute as part of event subscriptions.
  • The backing FaaS can be built in (OpenFaaS) or public cloud (AWS Lambda, Azure Functions)

The actions are small scripts that perform lightweight tasks or steps. For example, rename a virtual machine or assign an IP address.

Verify that the actions you are adding are associated with a project, and that they are released.

Sharing ABX:

Create ABX:

There are two methods of creating an extensibility action:

  • Writing user-defined code for an extensibility action script.
  • Importing a deployment package as a ZIP package for an extensibility action.

Objective 4.19 – Describe the Different Types of Tags in vRealize Automation

Tags are tags are tags in vRA; what differs is the context in which they’re used. For example:

The primary function of tags within vRealize Automation Cloud Assembly is to configure deployments using capabilities and constraints. Capability tags placed on cloud zones, network and storage profiles, and individual infrastructure resources define desired capabilities for deployments. Constraint tags that cloud administrators place on projects enable them to exercise a form of governance over those projects. These constraint tags are added to other constraints expressed in blueprints.

During provisioning, vRealize Automation Cloud Assembly matches these capabilities with constraints, also expressed as tags, in blueprints to define deployment configuration. This tag-based capability and constraint functionality serves as the foundation for deployment configuration in vRealize Automation Cloud Assembly. For instance, you can use tags to make infrastructure available only on PCI resources in a particular region.

External Tags

vRealize Automation Cloud Assembly might also contain external tags. These tags are imported automatically from cloud accounts that you associate with a vRealize Automation Cloud Assembly instance. These tags might be imported from vSphere, AWS, Azure or other external software products. When imported, these tags are available for use in the same manner as user created tags.

Using capability tags:

Using constraint tags:

vRealize Automation Cloud Assembly applies standard tags to some deployments to support analysis, monitoring, and grouping of deployed resources. Standard tags are unique within vRealize Automation Cloud Assembly. Unlike other tags, users do not work with them during deployment configuration, and no constraints are applied. These tags are applied automatically during provisioning on AWS, Azure, and vSphere deployments. These tags are stored as system custom properties, and they are added to deployments after provisioning.

How tags are processed

In vRealize Automation Cloud Assembly, tags express capabilities and constraints that determine how and where resources are allocated to provisioned deployments during the provisioning process. vRealize Automation Cloud Assembly uses a specific order and hierarchy in resolving tags to create provisioned deployments. Understanding the basics of this process will help you to implement tags efficiently to create predictable deployments. The following list summarizes the high level operations and sequence of capability and constraint tag processing:

  • Cloud zones are filtered by several criteria, including availability and profiles; tags in profiles for the region the zone belongs to are matched at this point.
  • Zone and compute capability tags are used to filter the remaining cloud zones by hard constraints.
  • Out of the filtered zones, priority is used to select a cloud zone. If there are several cloud zones with the same priority, they are sorted by matching soft constraints, using a combination of the cloud zone and compute capabilities.
  • After a cloud zone is selected, a host is selected by matching a series of filters, including hard & soft constraints as expressed in blueprints.

Objective 4.20 - Configure Capability Tags

In vRealize Automation Cloud Assembly, capability tags enable you to define placement logic for deployment of infrastructure components. They are a more powerful and succinct option to hard coding such placements.

You can create capability tags on compute resources, cloud zones, images and image maps, and networks and network profiles. The pages for creating these resources contain options for creating capability tags. Alternatively, you can use the Manage Tags page in vRealize Automation Cloud Assembly to create capability tags. Capability tags on cloud zones and network profiles affect all resources within those zones or profiles. Capability tags on storage or network components affect only the components on which they are applied.

Typically, capability tags might define things like location for a compute resource, adapter type for a network, or tier level for a storage resource. They can also define environment location or type and any other business considerations. As with your overall tagging strategy, you should organize your capability tags in a logical manner. vRealize Automation Cloud Assembly matches capability tags with constraints from cloud zones and on blueprints at deployment time. So, when creating and using capability tags, you must understand and plan to create appropriate blueprint constraints so that matching will occur as expected.

For example, in the Add Cloud Zones topic in the Wordpress example, you created dev and test tags for the OurCo-AWS-US-East and OurCo-AWS-US-West zones. This indicates that the OurCo-AWS-US-East zone is a development environment, and the OurCo-AWS-US-West zone is a test environment. Paired with the appropriate constraint tags, these capability tags enable you to direct deployments to the desired environments.
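A matching constraint in a blueprint might then look like this hypothetical fragment (resource and image names are my own), steering the deployment to whichever cloud zone carries the `dev` capability tag:

```yaml
# Hypothetical blueprint fragment. The hard 'dev' constraint matches the
# cloud zone capability-tagged 'dev'; provisioning fails if no zone matches.
resources:
  DevVM:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      constraints:
        - tag: 'dev:hard'
```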

Objective 4.21 - Configure Multi-Tenancy

Blog: Fantastic article

Official docs:

Section 5 – Performance-tuning, Optimization, Upgrades

  • There are no testable objectives for this section

Section 6 – Troubleshooting and Repairing

Objective 6.1 - Collect Log Bundles

You can change the timeout value for collecting logs from each node. For example, if your environment contains large log files, slow networking, or high CPU usage, you might need to set the timeout to greater than the default value of 1000 seconds:

vracli log-bundle --collector-timeout $CUSTOM_TIMEOUT_IN_SECONDS

Objective 6.2 - Describe vracli Command Options

[email protected] [ ~ ]# vracli --help
usage: vracli [-h] [-v] [-j]

positional arguments:
    certificate         Manipulate vRA certificates. Try 'vracli certificate' to see tutorial.
    cluster             Cluster administration commands
    db                  Database-related commands
    disk-mgr            Disk management commands
    license             List the currently registered license keys
    load-balancer       Returns the current load balancer address
    log-bundle          Create a log bundle in the current directory
    ntp                 Enable/Disable time synchronization.
    org-oauth-apps      Get current availability status of third party apps integration.
    proxy               Configure an internet proxy server. Try 'vracli proxy' to see tutorial.
    remote-syslog       Remote Syslog integration commands
    reset               Reset commands
    service             Service management and monitoring commands
    status              Display cluster status
    ceip                VMware Customer Experience Improvement Program (CEIP)
    tenant              Tenant administration commands
    upgrade             Upgrade commands
    version             Get the appliance version
    vidm                vIDM administration commands
    vrli                vRealize LogInsight integration commands
    vro                 Configure and control Orchestrator service

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose
  -j, --json            print output as json

Objective 6.3 - Describe kubectl Command Options

[email protected] [ ~ ]# kubectl --help
kubectl controls the Kubernetes cluster manager.

 Find more information at:

Basic Commands (Beginner):
  create         Create a resource from a file or from stdin.
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  explain        Documentation of resources
  get            Display one or many resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout        Manage the rollout of a resource
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources.
  cluster-info   Display cluster info
  top            Display Resource (CPU/Memory/Storage) usage.
  cordon         Mark node as unschedulable
  uncordon       Mark node as schedulable
  drain          Drain node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers.
  auth           Inspect authorization

Advanced Commands:
  diff           Diff live version against would-be applied version
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource using strategic merge patch
  replace        Replace a resource by filename or stdin
  wait           Experimental: Wait for a specific condition on one or many resources.
  convert        Convert config files between different API versions
  kustomize      Build a kustomization target from a directory or a remote url.

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources  Print the supported API resources on the server
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  plugin         Provides utilities for interacting with plugins.
  version        Print the client and server version information

  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

Objective 6.4 - Troubleshoot vRealize Automation Configuration Errors

Objective 6.5 - Troubleshoot Provisioning Errors

Troubleshoot ABX action runs:

What to do if a CAS deployment fails:

Objective 6.6 - Monitor Deployments

Objective 6.7 - Monitor vRealize Orchestrator Workflow Execution

Track workflow runs:

Troubleshooting WF runs:

Section 7 – Administrative and Operational Tasks

Objective 7.1 - Manage the Identity and Access Management Tab

Administering users:

Objective 7.2 - Manage Cloud Accounts

Adding cloud accounts:

Objective 7.3 - Manage Cloud Zones

Objective 7.4 - Manage Projects

Learn more:

Objective 7.5 - Manage Image Mappings

Objective 7.6 - Manage Flavor Mappings

Objective 7.7 - Manage Capability and Constraint Tags

How vRA processes tags:

Objective 7.8 - Manage Storage Profiles

Add storage profiles:

Learn more:

Objective 7.9 - Manage Network Profiles

Objective 7.10 - Create and Manage Blueprints

Objective 7.11 - Create and Manage Blueprint Versions

How to version:

Objective 7.12 - Manage Extensibility/Subscription

Objective 7.13 - Deploy Catalog Items

Objective 7.14 - Manage Deployments

Objective 7.15 - Describe Kubernetes Clusters

Objective 7.16 - Customize a Deployment using cloudConfig and cloud-init
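As a minimal sketch (package choice and names are my own), a blueprint machine can carry a `cloudConfig` property whose contents are passed to cloud-init in the guest on first boot:

```yaml
# Hypothetical blueprint fragment. The cloudConfig block is handed to
# cloud-init inside the guest OS at first boot.
resources:
  WebVM:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      cloudConfig: |
        #cloud-config
        packages:
          - nginx
        runcmd:
          - systemctl enable --now nginx
```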

Objective 7.17 - Create Service Broker Content Sources

Objective 7.18 - Configure Content Sharing

Objective 7.19 - Create and Manage Custom Forms

Objective 7.20 - Manage Policies

Setting up policies:

Objective 7.21 – Manage Notifications

About Stellios Williams
Technical Account Manager VMware
This is my personal tech related blog for anything private and public cloud - including homelabs! My postings are my own and don’t necessarily represent VMware’s positions, strategies or opinions. Any technical guidance or advice is given without warranty or consideration for your unique issues or circumstances.