vRO Actions for Aria Automation: When the Blueprint Isn’t Enough
A large enterprise customer’s Aria Automation deployment needed three capabilities the platform doesn’t provide natively: dynamic Service Broker dropdowns, intelligent network profile routing, and post-provision disk attachment. Three vRealize Orchestrator JavaScript actions — small, focused, production-tested — closed the gaps.
The Challenge
When the team migrated the customer’s VM provisioning from legacy vRealize Orchestrator workflows to modern Aria Automation Cloud Templates, three platform limitations surfaced that no blueprint configuration could solve.
The static catalog problem. The customer’s legacy ServiceNow portal had a hand-maintained network catalog. Users picked from a list, and someone manually updated that list whenever new networks were added. Aria Automation’s Service Broker can populate dropdowns from blueprint inputs, but the blueprint inputs are themselves static — defined at blueprint version time. Adding a new network segment meant editing the blueprint, redeploying it, and refreshing the catalog. The customer wanted dropdowns that auto-populated from live infrastructure data, with the dropdown updating the moment a new segment was tagged in the back-end.
The wrong-profile problem. Multi-datacenter deployments needed to route to the correct network profile based on the user’s selected segment and datacenter. Aria’s resource constraint engine handles this for VLAN-backed segments via tag matching, but it falls down for NSX overlay segments in federated environments (see the unified VM blueprint story for the full discovery). The customer’s deployments occasionally selected the wrong profile, causing the IPAM provider to fail allocation or the VM to land on the wrong network entirely.
The storage validator problem. Aria Automation’s storage validator runs at deployment-validate time, before the deployment actually starts. The validator checks that requested storage will fit on the target Cloud Zone. For multi-disk Windows builds with several large data disks, the validator was overly conservative — failing deployments that would have succeeded had the disks been allocated post-provision according to actual storage policies. The customer needed a way to bypass the validator while still attaching the required disks.
The Solution: Three Surgical vRO Actions
The team built a single vRO module — com.essential.aria — containing three JavaScript actions, each solving one specific gap.
Action 1: getNetworkSegmentsAll — Dynamic Service Broker Dropdown
The pattern: Service Broker custom forms support $dynamicEnum data sources. The form makes a live call to a vRO action, the action returns a list of values, and the dropdown populates with those values. The form refreshes whenever its filtering inputs change — meaning the dropdown shows the right networks for the right datacenter automatically.
What the action does:
The action receives the user’s currently-selected datacenter (via the form’s contextual input) and queries the back-end for all NSX segments matching that datacenter. It applies prefix-based filtering (the same pattern documented in the network automation toolkit) to exclude infrastructure backbone segments and keep only workload segments. It returns each segment with both its technical identifier and a friendly display name suitable for the dropdown.
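The filtering and shaping logic can be sketched as plain JavaScript. This is an illustrative sketch only: the segment list is passed in rather than queried (in vRO the real action would call NSX via a REST host or plug-in), and the prefix list and field names are hypothetical.

```javascript
// Hypothetical backbone prefixes to exclude; the real list comes from
// the customer's prefix-filtering conventions.
var INFRA_PREFIXES = ["TRANSIT-", "EDGE-", "UPLINK-"];

function isWorkloadSegment(name) {
  // Keep only workload segments: reject anything carrying an
  // infrastructure backbone prefix.
  return INFRA_PREFIXES.every(function (p) { return name.indexOf(p) !== 0; });
}

function getNetworkSegmentsAll(datacenter, segments) {
  // segments: [{ id, name, datacenter }] from the back-end query (stubbed here)
  return segments
    .filter(function (s) { return s.datacenter === datacenter; })
    .filter(function (s) { return isWorkloadSegment(s.name); })
    .map(function (s) {
      // Return simple key/value pairs with a friendly display name,
      // the shape a dynamic dropdown consumes.
      return { key: s.id, label: s.name + " (" + datacenter + ")" };
    });
}
```

The important property is that the action is a pure transformation of live back-end data: no cached catalog, no stored state to go stale.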
Why JavaScript and not the IPAM provider’s range list?
The IPAM provider’s GetIPRanges returns ranges in the Aria Automation IPAM SDK format — useful for IP allocation but not directly suitable for catalog dropdowns. The vRO action returns simpler key-value pairs with display names, exactly what the form expects. This layers responsibilities cleanly: the IPAM provider handles allocation, the vRO action handles catalog presentation.
The freshness guarantee:
The dropdown queries vRO live every time the form opens. Add a network segment in NSX, tag it appropriately via aria_mapping.py --servicenow-tags, and it appears in the catalog the next time someone opens the request form. No blueprint redeploy, no catalog refresh, no maintenance window.
Action 2: getNetworkProfileTag — Intelligent Profile Routing
The pattern: The blueprint declares network resources with profile constraints. The constraints reference tags. The action returns the correct tag based on the user’s runtime selections.
What the action does:
The action receives the selected segment name and datacenter, then determines which network profile that segment belongs to. The customer’s environment had five logical profiles (ESXP TX W01, ESXP VA W01, NSX Overlay TX, NSX Overlay VA, NSX Global Stretched). The action’s logic combines the segment’s prefix (NZ-*, TX-*, VA-*, US-CI-*, G-*) with the datacenter to route to the correct profile. It returns a profile tag string that Aria’s constraint engine consumes.
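The routing step is just a deterministic prefix-plus-datacenter lookup. The sketch below shows the shape of that logic; the specific prefix-to-profile mapping here is invented for illustration (the real rules belong to the customer's environment), and the returned tag strings are hypothetical.

```javascript
// Illustrative getNetworkProfileTag-style routing. Mapping is hypothetical.
function getNetworkProfileTag(segmentName, datacenter) {
  if (segmentName.indexOf("G-") === 0) {
    // Globally stretched segments ignore the datacenter entirely.
    return "profile:nsx-global-stretched";
  }
  if (segmentName.indexOf("US-CI-") === 0 || segmentName.indexOf("NZ-") === 0) {
    // Overlay segments route to the overlay profile for the datacenter.
    return "profile:nsx-overlay-" + datacenter.toLowerCase();
  }
  if (segmentName.indexOf("TX-") === 0 || segmentName.indexOf("VA-") === 0) {
    // VLAN-backed segments route to the ESXP profile for the datacenter.
    return "profile:esxp-" + datacenter.toLowerCase() + "-w01";
  }
  // Fail loudly on an unrecognized prefix rather than guessing a profile.
  throw new Error("No profile mapping for segment '" + segmentName +
                  "' in datacenter '" + datacenter + "'");
}
```

Throwing on an unrecognized prefix matters: a wrong-but-plausible default tag reproduces exactly the silent wrong-profile failures this action exists to eliminate.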
Why this can’t be in the blueprint:
Blueprints can apply tag-based constraints, but the constraint expressions can’t compute new tag values from runtime inputs. The expression tag:profile:overlay-tx is static. What’s needed is tag:profile:${computed-from-segment-and-datacenter}, and that computation requires logic Aria’s expression syntax doesn’t support. The vRO action provides the missing computation step.
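To make the gap concrete, a Cloud Template fragment along these lines illustrates the split: the constraint tag is bound to an input the custom form fills in from the vRO action, rather than hard-coded. This is a hedged sketch, not the customer's actual blueprint — resource and input names are illustrative.

```yaml
# Illustrative Cloud Template fragment (names are hypothetical).
inputs:
  networkProfileTag:
    type: string   # populated by the custom form calling getNetworkProfileTag
resources:
  Cloud_Network_1:
    type: Cloud.Network
    properties:
      networkType: existing
      constraints:
        # Static alternative would be: - tag: 'profile:overlay-tx'
        - tag: '${input.networkProfileTag}'
```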
Why this matters operationally:
Before this action, deployments occasionally selected the wrong profile and failed. The IPAM provider would receive an IP allocation request against a network that didn’t belong to the selected profile, and the request would fail with a confusing error. After this action, profile selection became deterministic — the right segment always routes to the right profile, and the IPAM provider always gets a valid allocation request.
Action 3: addDataDisksOnDeploy — Post-Provision Disk Attachment
The pattern: Event subscriptions in Aria Automation listen for deployment lifecycle events. The compute.provision.post event fires after a VM has been successfully provisioned. The action runs in response, performing additional configuration that wasn’t part of the blueprint deployment.
What the action does:
When a deployment completes successfully, the action reads the deployment metadata to determine what additional data disks were requested. The blueprint declares these disks as inputs but doesn’t actually allocate them at deployment time — they’re stored as metadata for post-processing. The action connects to vCenter via the customer’s service account, attaches the requested disks to the VM, waits for VMware Tools to confirm the disks are visible to the guest OS, and signals completion.
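The handler's control flow can be sketched as a small coordinator. Everything vCenter-specific is injected as a stand-in (in vRO the attach would go through the vCenter plug-in under the service account), and the deployment-metadata shape shown is an assumption for illustration.

```javascript
// Illustrative addDataDisksOnDeploy-style handler skeleton.
// vc.attachDisk and vc.disksVisibleInGuest stand in for the real
// vCenter plug-in and VMware Tools calls.
function addDataDisksOnDeploy(deployment, vc) {
  // Disks are declared as blueprint inputs but carried as metadata,
  // so the storage validator never evaluates them.
  var requested = deployment.metadata.dataDisks || []; // e.g. [{ sizeGb: 500 }]
  requested.forEach(function (disk) {
    vc.attachDisk(deployment.vmId, disk.sizeGb);
  });
  // Signal failure with a clear message rather than completing silently
  // if the guest OS never reports the new disks.
  if (!vc.disksVisibleInGuest(deployment.vmId, requested.length)) {
    throw new Error("Attached " + requested.length +
                    " disk(s) but VMware Tools did not report them in the guest");
  }
  return requested.length;
}
```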
Why this avoids the validator:
The storage validator runs against blueprint-declared storage. Disks not declared in the blueprint never face the validator. By moving disk allocation from blueprint-time to post-provision-time, the action sidesteps the validator entirely. The actual storage policy enforcement happens at attach-time in vCenter, which respects per-datastore capacity correctly.
The Tools-confirmation pattern:
A common failure mode was attaching disks that the guest OS didn’t see — typically because the attachment fired before the guest had finished booting. The action polls VMware Tools for disk visibility before signaling completion. If Tools reports the new disks within a reasonable timeout, the action signals success; if not, it signals failure with a clear error message rather than silently completing while leaving the VM in an inconsistent state.
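The poll itself is a bounded retry loop. In vRO the delay would be System.sleep; here the sleep and the Tools check are injected so the pattern stands alone as a sketch.

```javascript
// Illustrative Tools-confirmation poll with a timeout bound.
// checkFn: returns true once VMware Tools reports the expected disks.
// sleepFn: injected delay (System.sleep in vRO).
function waitForGuestDisks(checkFn, timeoutMs, intervalMs, sleepFn) {
  var waited = 0;
  while (waited <= timeoutMs) {
    if (checkFn()) {
      return true;           // guest OS sees the new disks
    }
    sleepFn(intervalMs);
    waited += intervalMs;
  }
  return false;              // caller signals failure, never silent success
}
```

The caller turns a `false` return into an explicit error, so a deployment can never report success while the VM is missing disks the guest can't see.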
The Results
Three small actions, three substantial capabilities the platform doesn’t provide natively.
Dynamic catalog with zero maintenance. The Service Broker network dropdown stays current automatically. The customer’s network team can add segments in NSX, run the tagging command, and the new segment appears in the catalog within minutes. No blueprint redeploys. No catalog refreshes. No manual coordination.
Deterministic profile routing. Every deployment routes to the correct network profile. The “wrong profile” failure mode is gone. The IPAM provider always receives valid allocation requests because the profile and segment combination is always consistent.
Storage validator bypass. Multi-disk Windows builds deploy reliably regardless of which Cloud Zone they target. The validator’s overly conservative checks no longer block deployments that should succeed. The actual storage placement decisions happen at vCenter attach-time where they belong.
Lessons Learned
Small actions, single responsibilities. Each of the three actions does one thing. They don’t share state, they don’t depend on each other, they don’t grow into general-purpose libraries. When a new gap appears, write a new action — don’t extend an existing one. The customer’s team can understand and maintain three focused actions far more easily than one bloated library.
JavaScript in vRO is fine for this. The actions are small (each well under 200 lines), the logic is straightforward, and vRO’s editor handles them well. There’s no reason to over-engineer with TypeScript or build pipelines. The JavaScript files live in source control alongside the customer’s other vRO content.
Event subscriptions are the right extension point. Modifying the blueprint to do post-provision work pollutes the blueprint with concerns that don’t belong there. Event subscriptions keep the blueprint focused on declarative deployment and the post-provision logic separate. The pattern generalizes — the customer is using the same approach for monitoring agent registration and inventory updates.
Tools-confirmation prevents silent failures. Always wait for VMware Tools to confirm a change took effect before signaling completion. Disk attachments, network changes, customization completions — all of them need a Tools-level confirmation step. Without it, the deployment looks successful while the VM is actually in an inconsistent state.
What We’d Do Differently
Better error messaging from getNetworkProfileTag. When the routing logic couldn’t determine a profile (an unrecognized prefix, a missing datacenter, an edge case in the segment name), the action returned a generic error. A more diagnostic message identifying why the routing failed would have shortened troubleshooting cycles. Future versions will log the exact inputs and the decision path taken.
Earlier rate-limit handling. getNetworkSegmentsAll queries NSX every time the form opens. In bursts (multiple users opening forms simultaneously), this can exceed NSX’s API rate limits. Adding lightweight caching with a short TTL would have prevented the occasional rate-limit-induced failures during peak request periods.
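The fix described is small. A sketch of a short-TTL cache, with the clock injected for testability (vRO would simply use Date.now()); the TTL value and wrapper shape are assumptions, not the shipped implementation.

```javascript
// Illustrative short-TTL cache for NSX segment queries.
// now: injected clock returning milliseconds (Date.now in practice).
function makeTtlCache(ttlMs, now) {
  var value = null;
  var fetchedAt = -Infinity;
  return function (fetchFn) {
    if (now() - fetchedAt >= ttlMs) {
      value = fetchFn();     // cache expired: refresh from NSX
      fetchedAt = now();
    }
    return value;            // inside the TTL window: serve cached result
  };
}
```

A TTL of a few seconds absorbs a burst of simultaneous form opens while keeping the dropdown effectively live.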
Getting Started
The actions require VMware vRealize Orchestrator 8.x, Aria Automation 8.x with the companion blueprint and IPAM provider, NSX Manager API access from vRO, and a vCenter service account for the disk attachment action.
Clone the repository, then:
- Import the com.essential.aria package into vRO.
- Point the actions’ input bindings at your NSX and vCenter endpoints.
- Configure an event subscription in Aria Automation (compute.provision.post triggering addDataDisksOnDeploy).
- Configure your blueprint’s Service Broker form to call getNetworkSegmentsAll via $dynamicEnum.
Conclusion
Aria Automation handles 90% of provisioning needs natively. The remaining 10% is where vRO actions earn their place — providing the dynamic computation, the event-driven extension, and the validator-bypass patterns that the platform doesn’t offer directly.
Three actions. Three production capabilities. The pattern generalizes to any Aria Automation deployment with similar gaps.
The complete vRO module, action source code, and integration documentation are available on GitHub.
Repository: github.com/noahfarshad/aria-vro-actions
Related Stories:
- Production-Ready BlueCat IPAM Integration — IPAM provider these actions complement
- Infrastructure-as-Code for Aria Automation — Toolkit that tags segments these actions surface
- One Blueprint, Three OS Families — Blueprint that consumes these actions
- Idempotent Windows Post-Deploy — Another extension point in the provisioning pipeline
