One Blueprint, Three OS Families: Unifying Windows, Linux, and Oracle VM Provisioning
A large enterprise customer transformed VM provisioning from six separate vRealize Orchestrator workflow branches into a single unified Cloud Assembly blueprint. This is the story of building that production-ready blueprint for VMware Aria Automation across eight major versions, from v5.5.1 to v8.8.3: twenty-six inputs, dynamic network dropdowns, and the lessons learned along the way.
The Challenge
A Fortune 500 federal services integrator’s CIO and Hosting Services team had a VM provisioning landscape fragmented across operating system and datacenter boundaries. Six separate vRO workflow branches handled the matrix: Windows, Linux, and Oracle, each replicated across two datacenters (TX and VA).
Each workflow was functional in isolation, but the fragmentation created operational friction that grew worse as the environment scaled. Six separate codebases meant six separate places to fix the same bug. When a network configuration approach changed, six workflows needed updating. When a security baseline shifted, multiple teams independently implemented the same change — often inconsistently. End users requesting VMs faced different forms with different fields, different validation rules, and different terminology for the same concepts.
The legacy workflows were also full of accumulated workarounds. Domain joins were happening twice in some paths because nobody had ever fixed the original failure. There were five-to-eight-minute sleep timers padding around guest operations calls. Credentials were sprinkled throughout JavaScript actions. The “network catalog” users picked from in ServiceNow was a static list maintained by hand — meaning new network segments couldn’t be deployed to until somebody manually updated the catalog.
The customer’s ask was simple to state and hard to deliver: “Give us a self-service Service Broker form that does what our ServiceNow portal does today, but better — and that our team can actually maintain.”
The Journey: Eight Versions of Iteration
Why a Single Blueprint?
The team conducted interviews with each OS team to catalog their actual technical requirements separate from their historical workflow patterns. The analysis revealed that approximately 80% of provisioning requirements were identical across OS families. All needed vSphere cluster and datastore selection, network configuration with multiple NICs, CPU and memory sizing, disk configuration, Cloud Zone selection, and Service Broker catalog integration.
The remaining 20% required intelligent conditional handling. Windows needed Active Directory domain join, KMS activation, RDP enablement, and timezone configuration. Linux needed SSH key injection, cloud-init customization, and sudo configuration. Oracle Linux specifically needed database installation triggers and listener configuration.
The conclusion was clear: a single Cloud Assembly blueprint could handle all three OS families with conditional logic branching cleanly from a primary OS selection input.
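The shape of that branching, as a minimal sketch (the enum values, resource names, and playbook path here are illustrative, not the published blueprint's):

inputs:
  operatingSystem:
    type: string
    title: Operating System
    enum:
      - windows
      - rhel
      - oracle-linux
resources:
  vm:
    type: Cloud.vSphere.Machine
    properties:
      # Image mapping keyed directly off the OS selection
      image: '${input.operatingSystem}'
  oracle_db_install:
    type: Cloud.Ansible
    properties:
      # The Oracle branch only exists when that OS is selected
      count: '${input.operatingSystem == "oracle-linux" ? 1 : 0}'
      host: '${resource.vm.*}'
      osType: linux
      playbooks:
        provision:
          - /playbooks/oracle-db-install.yml

Every conditional in the blueprint keys off the same operatingSystem input, which is what keeps the branching legible.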
The NSX Federation Network Resource Discovery
The first major version-bump-worthy discovery was that NSX overlay segments behave fundamentally differently from VLAN-backed networks in Aria Automation blueprints.
What works for VLAN-backed segments: Cloud.vSphere.Network with tag-based constraints. The blueprint declares constraints like tag:datacenter:tx and tag:env:prod, and Aria’s resource broker matches networks meeting all constraints. Clean, declarative, works.
What does not work for NSX overlay segments in federated VCF environments: The same pattern. After exhaustive testing across multiple blueprint iterations, the team discovered that tag constraints simply cannot resolve NSX overlay segments correctly in federated VCF environments. The constraint engine would either fail to find any match, or worse, match the wrong network silently.
The fix: Cloud.NSX.Network with name-based selection. NSX overlay segments are referenced by name rather than by tag constraint. The blueprint became hybrid: VLAN-backed segments declared with Cloud.vSphere.Network and tags, NSX overlay segments declared with Cloud.NSX.Network and names.
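In sketch form (the tags and input names are illustrative), the hybrid declaration looks like this:

resources:
  vlan_nic:
    type: Cloud.vSphere.Network
    properties:
      networkType: existing
      # VLAN-backed: the resource broker resolves the network from tags
      constraints:
        - tag: 'datacenter:tx'
        - tag: 'env:prod'
  overlay_nic:
    type: Cloud.NSX.Network
    properties:
      networkType: existing
      # NSX overlay: name-based selection, because tag constraints do not
      # resolve overlay segments in federated VCF environments
      name: '${input.network1}'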
This wasn’t documented anywhere. It was discovered through trial and error across several blueprint versions, and the pattern survives in the published reference implementation as a load-bearing decision.
The Conditional Second-NIC Pattern
Multi-NIC support presented its own challenge. Most VMs need only one NIC, but some require two or more. The blueprint had to handle the optional case without breaking when users selected only one network.
The naïve approach: Declare a second NIC and let users skip the input. Doesn’t work — Aria’s validator complains about the empty network reference.
The fix: The count: 0 pattern. The blueprint declares the second NIC resource with a count expression: when the user has selected a second network, count is 1; when they haven’t, count is 0. Aria respects the zero count and skips the resource entirely, eliminating the validation error.
resources:
  vm_nic_2:
    type: Cloud.vSphere.Network
    properties:
      count: '${input.network2 != "" ? 1 : 0}'
      networkType: existing
      name: '${input.network2}'
The Empty-String-Default Pattern
A subtler bug emerged during user testing: users were occasionally clicking through the form without selecting a network, accepting the default, and deploying VMs that ended up on whatever network happened to be alphabetically first in the dropdown. This was rarely the intended network, and the resulting troubleshooting was painful.
The fix: Empty-string-default on the network dropdown so users cannot deploy by accident. The default value is "" (empty string), which fails validation if the user submits without selecting a real network. The form forces an explicit choice.
This pattern looks ugly in the YAML but is one of the highest-value safety mechanisms in the blueprint.
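A minimal sketch of the input (minLength is one way to make the empty default fail validation; the published form may enforce this differently):

inputs:
  network1:
    type: string
    title: Primary Network
    # Empty default: the request will not validate until the user
    # picks a real network from the dropdown
    default: ''
    minLength: 1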
The Dynamic Network Dropdown
The customer’s biggest pain point with the legacy ServiceNow portal was the static network catalog. Adding a new network segment required updating the catalog manually, with the resulting drift between what existed in NSX and what users could pick from.
The solution: Service Broker custom form with $dynamicEnum integration. The network dropdown populates from a live vRO action call (getNetworkSegmentsAll, see the vRO Actions story) that queries NSX in real time. Adding a new segment in the back-end means it appears in the catalog immediately — no blueprint redeploy required, no catalog refresh, no manual maintenance.
The form expression looks like:
network1:
  type: string
  title: Primary Network
  $dynamicEnum: /data/vra-network-segments?datacenter=${input.datacenter}
  default: ""
The dynamic enum filters networks by selected datacenter, so users only see networks relevant to their deployment target.
Post-Provision Data Disk Attach
Aria Automation’s storage validator created a problem: VMs requesting large amounts of additional storage failed validation if any single Cloud Zone datastore lacked sufficient space, even when storage policies should have distributed the load. The validator’s check was overly conservative, blocking deployments that would have actually succeeded.
The pattern: Provision VMs with minimal initial storage that always passes validation. Use a vRO subscription firing on compute.provision.post to attach additional data disks after the VM successfully deploys. The validator never sees the data disks; the disks attach correctly because they’re not being placed during the validated allocation phase.
The vRO action addDataDisksOnDeploy (see the vRO Actions story) handles the attachment. The blueprint declares the desired data disks as inputs; the subscription reads them from the deployment metadata and attaches them to the running VM.
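A sketch of the blueprint side, assuming the requested sizes ride along as a custom property the subscription can read from the event payload (input and property names are illustrative):

inputs:
  dataDiskSizesGb:
    type: array
    title: Additional Data Disks (GB)
    items:
      type: integer
    default: []
resources:
  vm:
    type: Cloud.vSphere.Machine
    properties:
      # No Cloud.vSphere.Disk resources are declared, so allocation always
      # validates; the post-provision subscription reads this property and
      # attaches the disks to the running VM
      dataDiskSizesGb: '${input.dataDiskSizesGb}'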
OS-Specific Configuration
Despite unification, each OS family required unique handling through conditional blocks within the blueprint.
Windows: Customization specs handle hostname configuration, domain join with credential injection, OU placement, timezone, KMS activation triggering, and Windows Update initial state. The customization spec receives parameters from blueprint inputs.
Linux (RHEL & Oracle Linux): Cloud-init payloads embedded in the blueprint handle user creation with SSH key injection, package installation, sudo configuration, network configuration, and timezone setting.
Oracle Database: When Oracle Linux is selected with database installation enabled, the blueprint triggers post-deploy Ansible playbooks for Oracle Database installation, listener configuration, ASM disk configuration, and database creation.
The conditional logic activates these branches only when relevant, keeping basic provisioning fast.
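For the Linux branches, a minimal sketch of the embedded cloud-init payload (the user name, package list, and input names are placeholders):

resources:
  vm:
    type: Cloud.vSphere.Machine
    properties:
      # cloudConfig applies on the Linux branches; Windows uses a
      # customization spec instead
      cloudConfig: |
        #cloud-config
        users:
          - name: svcadmin
            ssh_authorized_keys:
              - '${input.sshPublicKey}'
            sudo: ALL=(ALL) NOPASSWD:ALL
        packages:
          - chrony
        timezone: '${input.timezone}'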
The Results
The unified blueprint replaced six separate workflows with a single source of truth.
Quantifiable Outcomes
Workflow consolidation: Six separate vRO workflow branches (TX/VA × Windows/Linux/Oracle) collapsed into one unified Cloud Assembly blueprint.
Zero blueprint redeploys for catalog updates: The dynamic network dropdown means new segments appear in Service Broker the moment they’re tagged in the back-end. No more manually updating a static catalog.
Eliminated sleep timers: the five-to-eight-minute blind sleep timers from the legacy domain-join workflow were replaced by Cloud.Ansible's native retry handling.
Self-service for application teams: End users provision VMs through Service Broker directly, without phoning the platform team and without waiting for tickets.
Consistent experience across OS families: The same form, the same validation rules, the same naming conventions for Windows, Linux, and Oracle.
Technical Capabilities Delivered
Core Provisioning:
– Single unified blueprint serving Windows, Linux, and Oracle
– Twenty-six dynamic inputs with comprehensive validation
– Multi-NIC support with the count: 0 conditional pattern
– Empty-string-default network dropdown forcing explicit selection
– Hybrid network resource declaration (Cloud.vSphere.Network for VLAN, Cloud.NSX.Network for overlay)
– Dynamic Service Broker dropdown via $dynamicEnum and live vRO action calls
– Post-provision disk attach via vRO subscription
OS-Specific Features:
– Windows: Domain join, OU placement, timezone, KMS activation
– Linux: SSH key injection, cloud-init, sudo configuration
– Oracle: Database installation triggers, listener configuration
Federation Support:
– Multi-datacenter deployment with dynamic dropdown filtering
– NSX overlay segment handling via name-based selection
– VLAN-backed segment handling via tag-based constraints
– Compatible with NSX Global Manager + Local Manager topology
Lessons Learned
What Worked Well
OS selection as primary driver. A single operatingSystem input driving all conditional logic kept the blueprint readable. Complex nested conditions checking multiple inputs would have been much harder to maintain.
The hybrid network resource pattern. Declaring VLAN-backed and NSX overlay networks differently looked awkward in the YAML, but it’s the only pattern that actually works in federated VCF environments. The team tried tag-based constraints for NSX overlay across multiple versions before accepting that it doesn’t work and committing to name-based selection.
The empty-string-default safety mechanism. Forcing users to explicitly choose a network rather than accepting a default eliminated an entire class of misdeployment errors. Users who deployed to the wrong network used to file support tickets; now they can’t make that mistake.
Dynamic enum from a live vRO action. Static catalogs drift; live data sources don't. The dropdown is always current because it queries NSX in real time.
Post-provision pattern for storage. Working around the validator by attaching disks post-provision rather than declaring them upfront eliminated a class of false-positive deployment failures. The pattern is reusable for other post-deploy tasks.
What We’d Do Differently
Earlier validation testing. Several discoveries (the NSX overlay tag constraint failure, the storage validator’s behavior, the empty-string-default need) emerged from production user testing rather than blueprint validation. More aggressive synthetic testing of edge cases earlier would have shortened the iteration cycle.
Document the hybrid network pattern clearly. Future maintainers may try to “clean up” the apparent inconsistency between Cloud.vSphere.Network and Cloud.NSX.Network usage. The README needs to make clear that the inconsistency is load-bearing.
Better resource sizing guidance in the form. Users tended to over-provision when given freedom. Adding sizing recommendations based on workload type — web server, database, application server — would have reduced waste.
The Community Impact
The complete unified blueprint has been published as open-source software for the broader VMware community.
Repository: github.com/noahfarshad/aria-vm-blueprint
The repository includes:
– The complete Cloud Assembly YAML blueprint (v8.8.3)
– The Service Broker custom form definition (v1.0)
– Configuration examples for Windows, Linux, and Oracle deployments
– Integration guides for IPAM, vRO, and Ansible post-deploy
– Deployment documentation and a troubleshooting runbook
– Detailed explanations of the load-bearing patterns (hybrid network resources, count: 0, empty-string-default, dynamic enum)
The pattern is broadly applicable for any organization running multi-OS provisioning workflows in VMware Aria Automation. Whether you have two OS families or six, the conditional logic approach scales by adding enum values and conditional blocks rather than creating new blueprints.
Getting Started
The blueprint requires:
– VMware Aria Automation 8.x or later
– Service Broker configured with custom forms enabled
– Cloud Zones with image mappings for Windows and Linux templates
– Network profiles, with BlueCat IPAM integration recommended
– vRO with the companion aria-vro-actions package installed, for dynamic catalog support
– NSX Federation, if using overlay segments across datacenters
To get running:
– Clone the repository and import the YAML blueprint into Cloud Assembly
– Configure image mappings for your vSphere templates
– Import the Service Broker custom form
– Install the companion vRO actions package and configure the dynamic enum data source
– Test with a non-production VM deployment before opening to broader users
Conclusion
Unifying six separate provisioning workflows into a single Cloud Assembly blueprint required navigating undocumented platform behavior, working around validator limitations, and building safety mechanisms against user error. The result was one blueprint that handles real enterprise complexity: three OS families, two datacenters, hybrid VLAN and NSX overlay networking, dynamic catalog integration, and post-provision automation hooks.
For organizations consolidating VM provisioning workflows in VMware Aria Automation, the patterns demonstrated here provide a production-proven approach. The hybrid network resource pattern handles federated NSX. The count: 0 pattern enables conditional resources. The empty-string-default forces explicit user choice. The dynamic enum keeps catalogs current.
The complete blueprint, custom form, supporting documentation, and load-bearing pattern explanations are available on GitHub for organizations facing similar consolidation challenges.
Repository: github.com/noahfarshad/aria-vm-blueprint
Related Stories:
- Production-Ready BlueCat IPAM Integration — IPAM provider that allocates this blueprint’s networks
- Infrastructure-as-Code for Aria Automation — Network profile management this blueprint depends on
- vRO Actions for Dynamic Catalog Behavior — Dynamic catalog and disk attachment used here
- Idempotent Windows Post-Deploy — Idempotent post-deploy configuration triggered by this blueprint
