Dynamic Ansible Inventory: Bridging Aria Provisioning and Configuration Management
A large enterprise customer needed Ansible to know about every VM and bare-metal host in their environment without manually maintaining inventory files. Two Python inventory generators — one for VMware vSphere, one for HPE OneView across three API versions — bridged the gap between Aria-provisioned VMs, OneView-managed hardware, and Ansible’s playbook execution.
The Challenge
The customer’s Ansible playbooks needed accurate, current inventory. Hand-maintained inventory files weren’t viable: VMs come and go, IPs change, group memberships drift. The legacy approach was a quarterly export from vCenter into a static INI file, which was stale within a week.
Multiple sources of truth. VM workloads lived in vSphere. Bare-metal infrastructure lived in HPE OneView. Both needed to feed into the same Ansible inventory because the same playbooks operated against both. No single tool covered both sources.
Multiple OneView versions. The customer’s OneView estate spanned three different appliance versions: 6.6 (API version 3800), 8.9 (API version 6400), and 10.0 (API version 7600). Each version had different authentication options, different REST API surfaces, and different field naming. A generator that worked against 6.6 would fail against 10.0 with HTTP 400 errors. A generator for 10.0 wouldn’t authenticate against 6.6.
IP resolution complexity. vSphere gives you VMs, but VMs don’t always have a single, obvious IP. A multi-NIC Windows VM might report several IPs through VMware Tools — some of them link-local, some of them DHCP, some of them the actual management IP Ansible should use. The generator had to choose intelligently.
Grouping requirements. Ansible playbooks target groups: [windows], [linux], [oracle_db], [prod], [tx_datacenter]. The generator had to extract group memberships from VM properties — operating system, vSphere folder structure, environment tags, custom attributes — and produce a properly grouped INI file.
Operator workflow. The customer’s operators worked primarily in vRealize Orchestrator workflows. Asking them to leave vRO to run a separate inventory generator wasn’t acceptable. The generators needed to integrate into vRO so operators never left the Aria UI.
The Solution: Two Generators, One Pattern
The team built a Python inventory generator for each source and packaged them as importable vRO workflows. Both follow the same pattern: query the source, transform the data, write INI-format inventory, optionally push to source control.
The vSphere Generator
The vSphere generator queries vCenter via pyVmomi, enumerates VMs across configured datacenters, resolves each VM’s primary IP intelligently, extracts grouping metadata, and writes an INI-format Ansible inventory.
Intelligent IP resolution. The generator prefers VMware Tools-reported IPs over guess-based resolution. When Tools reports multiple IPs, it filters out link-local addresses, IPv6 addresses (configurable), and addresses on management networks unless those are explicitly the target. For VMs without Tools reporting (rare, but they exist), the generator falls back to vSphere’s guest.ipAddress attribute or skips the VM with a warning.
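The selection logic described above can be sketched as a pure function over the Tools-reported address list. This is an illustrative helper (the function name and flag are assumptions, not the generator's actual API); it uses the standard-library ipaddress module to filter out link-local and, by default, IPv6 addresses:

```python
import ipaddress

def pick_primary_ip(candidates, include_ipv6=False):
    """Choose a management IP from VMware Tools-reported addresses.

    Hypothetical helper: skips link-local and loopback addresses,
    drops IPv6 unless requested, and returns the first survivor
    (or None, signalling the VM should be skipped with a warning).
    """
    for raw in candidates:
        try:
            ip = ipaddress.ip_address(raw)
        except ValueError:
            continue  # Tools occasionally reports malformed entries
        if ip.is_link_local or ip.is_loopback:
            continue
        if ip.version == 6 and not include_ipv6:
            continue
        return str(ip)
    return None
```

For a multi-NIC Windows VM reporting `["169.254.10.5", "fe80::1", "10.20.30.40"]`, the helper returns `"10.20.30.40"`: the link-local IPv4 and IPv6 entries are filtered out and the routable address wins.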
Grouping by multiple dimensions:
- Operating system — Reads the VM’s guest OS identifier from vCenter and maps it to standard groups (windows, linux, rhel, oracle_linux)
- vSphere folder — The folder path becomes a group name, allowing playbooks to target by organizational structure
- Environment — Custom attributes, tags, or naming conventions identify dev/test/prod
- Custom tags — vSphere tags map directly to Ansible group memberships
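The grouping dimensions above reduce to a small transform from VM metadata to group names, followed by INI rendering. A minimal sketch, assuming simplified inputs (the function names and the guestId-prefix mapping are illustrative, not the generator's real code):

```python
def groups_for_vm(guest_id, folder_path, env_tag):
    """Derive Ansible group names from VM metadata (illustrative)."""
    groups = set()
    # OS dimension: map vCenter guestId prefixes to standard groups
    if guest_id.startswith("windows"):
        groups.add("windows")
    else:
        groups.add("linux")
        if guest_id.startswith("rhel"):
            groups.add("rhel")
        elif guest_id.startswith("oracleLinux"):
            groups.add("oracle_linux")
    # Folder dimension: "/TX/Prod" becomes the group "tx_prod"
    groups.add(folder_path.strip("/").replace("/", "_").lower())
    # Environment dimension: tag passed through directly
    groups.add(env_tag)
    return sorted(groups)

def render_ini(hosts):
    """hosts: {name: (ip, [groups])} -> INI-format inventory text."""
    sections = {}
    for name, (ip, groups) in hosts.items():
        for group in groups:
            sections.setdefault(group, []).append(f"{name} ansible_host={ip}")
    lines = []
    for group in sorted(sections):
        lines.append(f"[{group}]")
        lines.extend(sorted(sections[group]))
        lines.append("")  # blank line between sections
    return "\n".join(lines)
```

A RHEL VM in the /TX/Prod folder tagged prod thus lands in [linux], [rhel], [tx_prod], and [prod], so any of those playbook targets will pick it up.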
Powered-off filtering. By default, the generator excludes powered-off VMs (Ansible can’t reach them anyway). A flag enables their inclusion when the playbook needs to target stopped VMs.
Bitbucket push. Optionally, the generator commits the generated inventory to a Bitbucket repository via token-based git authentication. This gives the customer a versioned history of inventory changes — useful for understanding what the inventory looked like at any point in the past, and for audit trails.
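The push step amounts to embedding the access token in the HTTPS remote and running the usual add/commit/push cycle, skipping the commit when nothing changed. A sketch under those assumptions (the `x-token-auth` username is Bitbucket's convention for token auth over HTTPS; the function names are hypothetical):

```python
import subprocess
from urllib.parse import quote

def token_url(base_url, token):
    """Embed an access token into an HTTPS clone URL (illustrative)."""
    scheme, rest = base_url.split("://", 1)
    return f"{scheme}://x-token-auth:{quote(token, safe='')}@{rest}"

def push_inventory(repo_dir, message="nightly inventory update"):
    """Commit and push the regenerated inventory (sketch; assumes the
    repo is already cloned and its remote carries token credentials)."""
    subprocess.run(["git", "-C", repo_dir, "add", "-A"], check=True)
    # `git diff --cached --quiet` exits non-zero only when staged
    # changes exist, so an unchanged inventory produces no commit
    diff = subprocess.run(
        ["git", "-C", repo_dir, "diff", "--cached", "--quiet"])
    if diff.returncode != 0:
        subprocess.run(
            ["git", "-C", repo_dir, "commit", "-m", message], check=True)
        subprocess.run(["git", "-C", repo_dir, "push"], check=True)
```

Skipping empty commits keeps the Bitbucket history meaningful: every commit in the repository corresponds to a real change in the environment.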
The HPE OneView Generator
The OneView generator handles the multi-version complexity through explicit version-aware code paths.
Version detection. On startup, the generator queries the OneView appliance’s /rest/version endpoint to determine which API version it’s talking to. The reported version routes execution to the appropriate code path: 6.6 (API 3800), 8.9 (API 6400), or 10.0 (API 7600).
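The routing step is a threshold check against the reported API version. A minimal sketch, assuming /rest/version reports a numeric current version and using the thresholds named in this article (the function name is illustrative):

```python
def route_api_version(current_version):
    """Map OneView's reported API version number to a code path.

    Illustrative: the thresholds correspond to the appliance versions
    discussed here (3800 -> 6.6, 6400 -> 8.9, 7600 -> 10.0); anything
    older than the oldest supported appliance is rejected outright.
    """
    if current_version >= 7600:
        return "10.0"
    if current_version >= 6400:
        return "8.9"
    if current_version >= 3800:
        return "6.6"
    raise RuntimeError(
        f"unsupported OneView API version {current_version}")
```

Using >= comparisons rather than exact matches means a point-release appliance reporting, say, 6500 still lands on the nearest known-good code path instead of failing outright.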
Authentication variants. OneView 10.0 added support for directory-based authentication (Active Directory) alongside local accounts. The generator supports both, with the auth method specified in the configuration. The 6.6 and 8.9 paths use local authentication only.
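The configuration-driven auth choice can be captured in how the login request body is built. A sketch under the assumption that directory auth is requested via an extra field on the login call (OneView's login-sessions endpoint accepts an authLoginDomain field for directory accounts; the helper itself is hypothetical):

```python
def login_payload(username, password, api_version, auth_domain=None):
    """Build the POST body for OneView's /rest/login-sessions (sketch).

    Directory-based auth is only honored on the 10.0 path (API >= 7600);
    the 6.6 and 8.9 paths always authenticate with local accounts, so
    any configured domain is ignored there.
    """
    body = {"userName": username, "password": password}
    if api_version >= 7600 and auth_domain:
        body["authLoginDomain"] = auth_domain
    return body
```

Keeping the version gate inside the payload builder means a single configuration file can carry a domain setting for the whole fleet without breaking logins against the older appliances.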
API surface differences. Server profile fields renamed between versions. iLO information lives at different REST paths. Network connection definitions changed structure. The generator’s data extraction code handles each version’s quirks separately rather than trying to abstract over them — version-specific code is more readable than a single function with conditional logic for every field.
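The "separate code paths" approach can be structured as one extractor function per version behind a small dispatch table. The field names below are placeholders standing in for each version's real schema differences, not OneView's actual response shapes:

```python
# One extractor per appliance version; each knows its own field names.
# The raw-dict keys here are hypothetical, illustrating that the same
# logical fields live under different names and paths per version.
def _extract_v66(raw):
    return {"name": raw["serverName"], "ilo_ip": raw["iloAddress"]}

def _extract_v89(raw):
    return {"name": raw["serverName"], "ilo_ip": raw["ilo"]["address"]}

def _extract_v100(raw):
    return {"name": raw["name"], "ilo_ip": raw["mgmt"]["ip"]}

EXTRACTORS = {"6.6": _extract_v66, "8.9": _extract_v89,
              "10.0": _extract_v100}

def extract_host(version, raw):
    """Normalize a version-specific server record into common fields."""
    extractor = EXTRACTORS.get(version)
    if extractor is None:
        raise RuntimeError(f"no extractor for OneView {version}")
    return extractor(raw)
```

Each extractor is trivially readable on its own, and a newly discovered quirk in one version is a one-line change in one function rather than another branch in a shared conditional.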
Hard-won across all three versions. The multi-version support emerged from actual deployment against the customer’s real fleet. Each version exposed different edge cases. OneView 8.9’s response to certain queries differed from its documentation. OneView 10.0 changed the iLO-info path. The version-specific code paths capture each of these discoveries.
vRO Integration
Each generator ships with two artifacts: the Python source and a vRO workflow.
The vRO workflow wraps the Python execution in a way that’s accessible from the Aria UI. Operators trigger the workflow with a few inputs (which vCenter, which OneView, which environment), and the workflow runs the Python generator with those parameters. Output goes to the configured destination — local file, Bitbucket repository, or both.
The .package file — a signed vRO export — can be imported directly into a fresh vRO instance, bringing the workflow and any required configuration elements together. The customer’s operators import the package once per vRO instance and the workflows are immediately available in the catalog.
Schedule support. The vRO workflows can be scheduled to run periodically — typically nightly — keeping the inventory current without manual intervention. The customer scheduled both generators to run at 2 AM with the output committed to Bitbucket, giving them a daily snapshot of the environment.
The Results
The inventory generators eliminated manual inventory maintenance.
Always-current inventory. Nightly runs produce up-to-date inventory automatically. New VMs from Aria provisioning appear in the next morning’s inventory. Decommissioned VMs disappear. IP changes are reflected. Group memberships update.
Multi-source aggregation. The same Ansible playbook can target VMs from vSphere and bare-metal hosts from OneView using a unified inventory. The customer’s operators don’t have to think about which tool manages which host — they just write playbooks targeting groups.
Multi-version OneView support. All three OneView appliance versions in the customer’s environment (6.6, 8.9, 10.0) are supported by a single generator with the same configuration interface. Replacing a OneView appliance doesn’t require reconfiguring inventory tooling.
Operator-friendly. Inventory generation is just another vRO workflow in the catalog. Operators trigger it the same way they trigger any other automation. No separate tools, no separate credentials, no separate UI.
Versioned history. Bitbucket commits give the customer a daily snapshot of their entire infrastructure inventory. Audit questions (“what was running on the night of X?”) have a definitive answer.
Lessons Learned
Version-specific code beats clever abstraction. The OneView generator could have tried to abstract over the three API versions with a clever common interface and per-version adapters. It would have been less code but more bugs. Explicit version-aware code paths — if version == "10.0": handle_10_specifically() — are easier to read, easier to debug, and easier to extend. The cost is some duplication; the benefit is that each version’s quirks are captured in one place.
VMware Tools is the right IP source. Trying to derive a VM’s primary IP from network configuration, DNS resolution, or DHCP guesswork is error-prone. VMware Tools knows what IPs the guest actually sees and which interfaces they’re on. Always prefer Tools-reported IPs and only fall back to guesswork when Tools isn’t available.
Schedule the generators, don’t run them on demand. Generating inventory on demand sounds appealing — always fresh — but creates a single point of failure. If the generator hits an issue, every Ansible run is blocked. Scheduled generation produces a snapshot file; Ansible reads the snapshot. If the generator fails one night, Ansible still runs against yesterday’s snapshot while the failure gets investigated.
Commit to source control. The Bitbucket integration was added late in the engagement and immediately proved valuable. Audit trail, debugging history, “what changed yesterday” investigations — all of it comes for free once inventory is committed daily. Treat inventory the same way you treat any other infrastructure artifact: version it.
Filter aggressively. Powered-off VMs shouldn’t appear in inventory by default. Test environments shouldn’t appear in production playbook targets. Hosts in known-bad states shouldn’t be reachable through the inventory. Filter at the generator level rather than expecting playbooks to skip-on-error correctly.
What We’d Do Differently
Add a NetBox or similar IPAM integration. The generators query vSphere and OneView directly. A future version could correlate against NetBox or another IPAM source to enrich inventory with metadata not available from the source systems (rack location, support tier, ownership). The customer’s BlueCat integration provides some of this; future work could expose it through the inventory.
Build a dry-run mode earlier. The early generators wrote inventory files immediately. Adding a --dry-run flag that prints what would be written without writing it would have made development faster and given operators a safer way to test configuration changes. Future versions include the flag from day one.
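Wiring such a flag in is a few lines of argparse; a minimal sketch (the option names beyond --dry-run and the helper functions are assumptions for illustration):

```python
import argparse

def build_parser():
    """CLI skeleton carrying the --dry-run flag discussed above."""
    parser = argparse.ArgumentParser(
        description="Ansible inventory generator")
    parser.add_argument("--output", default="inventory.ini",
                        help="path of the inventory file to write")
    parser.add_argument("--dry-run", action="store_true",
                        help="print the inventory instead of writing it")
    return parser

def write_or_print(inventory_text, args):
    """Honor --dry-run: print to stdout, or write the output file."""
    if args.dry_run:
        print(inventory_text)
    else:
        with open(args.output, "w") as fh:
            fh.write(inventory_text)
```

Because the flag only diverts the final write, a dry run exercises the full query-and-transform path, which is exactly what makes it useful for testing configuration changes safely.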
Standardize the vRO workflow structure. The two generators’ vRO workflows evolved independently and have minor structural differences. Future generators should follow a single workflow template so operators only have to learn one pattern.
Getting Started
The vSphere generator requires Python 3.7+, pyVmomi, vCenter Server with read access to all relevant datacenters, and optionally Bitbucket access tokens for repository commits.
The OneView generator requires Python 3.7+, the python-hpOneView library appropriate to your OneView version, network connectivity to OneView’s REST API, and OneView credentials (local account or directory-based depending on version).
Both generators integrate with vRO via signed .package files included in the repository.
Clone the repository, install Python dependencies, configure source endpoint URLs and credentials in the generator’s configuration file, import the appropriate .package file into vRO, and either trigger the workflow manually for testing or schedule it for production use.
Conclusion
Ansible inventory shouldn’t be a maintenance burden. The right pattern is automated generation from authoritative sources — vSphere for VMs, OneView for bare metal, with optional source control commits for audit trails.
The version-aware OneView generator is the more interesting half of this story. Multi-version API support is a real-world problem rarely addressed in publicly available tooling. The patterns documented here generalize to any multi-version vendor API.
The complete generators, vRO workflows, and .package files are available on GitHub.
Repository: github.com/noahfarshad/ansible-inventory-generators
Related Stories:
- Production-Ready BlueCat IPAM Integration — IPAM source for VMs that appear in this inventory
- Infrastructure-as-Code for Aria Automation — Configuration management for the broader pipeline
- One Blueprint, Three OS Families — Blueprint that provisions the VMs this inventory tracks
- vRO Actions for Dynamic Catalog Behavior — Sister vRO integration pattern
- Idempotent Windows Post-Deploy — Playbook this inventory feeds
