Overview
For many years the VMware Orchestration platform (i.e. vRealize Orchestrator, or vRO) has used JavaScript as its scripting language of choice. All out-of-the-box content and customer-created custom content has been written in JavaScript, with plugins exposing external product capabilities as JavaScript objects, properties and methods. For example, a function written in PowerShell would need to be executed on an external PowerShell host via the PowerShell vRO plugin, which provides methods like “invokeScript()”.
Things are now changing, so in this article I want to examine the new “Polyglot” functionality and show how it changes things.
What Is It?
Well Google says Polyglot means “a person who knows and is able to use several languages”. So in this case it implies that vRO is able to understand and handle more than just JavaScript. Starting from the 8.x product version, vRO (both external and embedded in vRA) has the ability to leverage code written in:
- JavaScript
- Python
- Node.js
- PowerShell
This does not mean that all the library workflows and plugins are available in all scripting languages. It does mean that you have the ability to leverage one of the above languages when you write a workflow script element or workflow action.
I’m Going to Write Everything in Python!
Well hang on there, it’s not quite that simple. One of the benefits of vRealize Orchestrator is the rich library of plugins that offer users the ability to write powerful workflows using pre-packaged content and feature-rich plugins. So unless you want to give up all those plugins and write everything from scratch, you can’t just switch to another runtime scripting language for everything. What you can do is use your language of choice for specific parts of a workflow where it makes the most sense to do so. However, there are limitations, so I’m going to use an example to illustrate.
Let’s Have an Example…
I spin up EC2 instances in AWS when I need to do some quick and dirty testing of external products and interactions. I would like to be able to locate and interact with some of those EC2 instances from my vRO platform; however, the vRO plugin for AWS is a bit limited, particularly when looking beyond EC2. What I would really like to do is leverage the AWS SDK, which of the new vRO runtime languages is available for Python and Node.js. In this instance I want to write my code in Python and have vRO execute it as part of a workflow.
The AWS SDK for Python is called boto3, so any Python code that interacts with AWS will likely need to leverage the boto3 module so that its methods can be called. Here is the first potential issue: vRO only ships with the default Python libraries (the same applies to the other scripting languages), so if you want to leverage anything else then the required modules need to be installed.
External runtime modules can only be supplied as part of a ZIP bundle when creating a vRO action. You cannot install anything directly on the appliance and make it globally available. This means that my AWS code needs to be contained within a vRO action (i.e. it cannot be a workflow script element). Well that’s OK, because once I have an action that contains my boto3 module I can use that same module in other vRO actions and script elements, right? Well, in a word, no. The action is treated as a self-contained space, so whatever you include within it cannot be referenced anywhere else.
Let’s Code
Let’s start off by putting together some basic search functionality in Python. I’m using Python here as it’s my preference over Node.js. Note this is done via a local Python editor, not within vRO.
Here I am fetching my AWS keys, regions etc. from a config file and storing them in an environment variable, before using boto3 to fetch all EC2 resources, filtering the results looking for something called ‘machineABC’ and printing its ID to the screen (note that EC2 names are just tags on an instance, not a core attribute of the instance).
import boto3
import os
import json

# Set all the AWS variables from config file as OS environment variables
os.environ['AWS_CONFIG_FILE'] = os.getcwd() + '/awsconfig'

# Select the EC2 service to use
ec2 = boto3.resource('ec2')

# Define a search filter to use when retrieving objects ('Values' must be a list)
filters = [{
    'Name': 'tag:Name',
    'Values': ['machineABC']
}]

# Fetch all the EC2 instances filtered with the above filter spec
instances = ec2.instances.filter(Filters=filters)

# Loop through returned EC2 instances printing the instance IDs
for instance in instances:
    print('Instance: ' + instance.id)
This is all OK; however, it’s not dynamic in nature and it doesn’t conform to what vRO is expecting. A vRO action that is executed via the Python runtime must be supplied as a function. Additionally, the function must have two defined inputs (“context” and “inputs”) so that it can accept the context of the workflow run as well as any external inputs that need to be passed into it. This means my code now looks like this, with the majority of my functionality contained within my newly defined “findEC2byName” function.
import boto3
import os
import json

# Set all the AWS variables from config file as OS environment variables
os.environ['AWS_CONFIG_FILE'] = os.getcwd() + '/awsconfig'

def findEC2byName(context, inputs):
    ..
    ..
As I’m now passing an “inputs” object into the function, I can use it to make my function dynamic: able to search for an EC2 instance by a supplied name rather than a statically coded one. Here I am saying that within “inputs” there will be a property called “instanceName” whose value I will use in the filter definition.
def findEC2byName(context, inputs):
    # Select the EC2 service to use
    ec2 = boto3.resource('ec2')
    # Define a search filter to use when retrieving objects
    filters = [{
        'Name': 'tag:Name',
        'Values': [inputs["instanceName"]]
    }]
    # Fetch all the EC2 instances filtered with the above filter spec
    instances = ec2.instances.filter(Filters=filters)
    # Loop through returned EC2 instances printing the instance IDs
    for instance in instances:
        print('Instance: ' + instance.id)
Now I have the basics of my Python code sorted, I need to test it locally on my machine to make sure it works functionally. To do that I have created an input object called “test” and populated it with a key-value pair, the key being “instanceName” and the value being “Test” (Test is the name of one of my EC2 instances). Once my input is defined I can call the function and supply my input to it. If I execute the code then my matching instance ID is printed. Note that the lines that define the input and call the function (shown below) are only there to test the function and are not required by vRO, so they should not be saved to the Python file. I have also removed “context” for this test.
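For reference, the local test lines are just two lines appended after the function definition; with “context” dropped from the signature for this test, the call takes only the inputs dictionary (these mirror the test lines shown in the full listing later on):

# Local-only test lines - not saved to the packaged Python file
test = {'instanceName': 'Test'}
findEC2byName(test)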

But What About vRO
Well, we’re getting there. Now that my code has proven to work I need to package it up (making sure the function inputs are correct and any test code is removed) along with the boto3 module and my AWS config file. This means placing the Python file and AWS config file in an empty directory and creating a sub-directory in the same directory called “lib”, which should look as follows.
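Assuming the Python file is saved as “findEC2Instance.py” (to match the entry handler specified later), the layout looks something like this:

your_directory/
├── findEC2Instance.py    (the Python action code)
├── awsconfig             (the AWS config file)
└── lib/                  (empty for now; external modules get installed here)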

The next piece of the puzzle is to install the boto3 module into the “lib” directory using “pip”.
pip3 install boto3 -t /your_directory_path_here/lib
Once everything is installed, the two files and the lib directory can be zipped up together before being imported into vRO. When a vRO action is created the zip file can be supplied by changing the runtime option to “Python 3.7” and then selecting “zip” from the associated drop-down list as shown below.

Once the zip file has been uploaded, the entry handler needs to be specified. This is the name of the Python file and the name of the function, joined together with a period. This makes my handler “findEC2Instance.findEC2byName”.

On the right-hand pane of the action, inputs and runtime resources can be specified. My function uses an input called “instanceName” so I need to add it here as an action input.

I’m going to ignore the return type for now but we will come back to that shortly.
As this is a vRO 8.x action it can be executed independently of a workflow for testing, using the “run” option, although no previous execution history is maintained. This is great for testing that the core functionality of an action works OK before integrating it into any workflows.

The log shows that the code found and printed the same EC2 instance ID as when I ran the code locally from my laptop outside of vRO.
What About Returning Objects?
Well, this can be somewhat troublesome. The core issue here is that vRO doesn’t understand native Python objects. It only accepts returned objects that can be converted into a JSON-serialized string, or basic types such as string, number etc. If the Python object you wish to return to vRO cannot be converted to JSON without error, then the vRO code execution will fail with a serialization error.
So let’s look at the Python code so far to illustrate this point. Right now I have a function that matches one or more EC2 instances based on a name tag in AWS, but what if I want to return all the matches to vRO so I can do some other processing on them? The first thing I am going to do is modify the Python script on my local machine.
In the highlighted lines you can see I have told my function to return all the matched EC2 instances by converting my EC2 collection into a Python list (i.e. an ordered and changeable array). In addition, I have made sure that I can access the list using an index number and reach the attributes of an EC2 instance within the list by printing the value of an attribute to screen. In this case I want to get the root device name of my EC2 instance.
import boto3
import os
import json

# Set all the AWS variables from config file as OS environment variables
os.environ['AWS_CONFIG_FILE'] = os.getcwd() + '/awsconfig'

def findEC2byName(inputs):
    # Select the EC2 service to use
    ec2 = boto3.resource('ec2')
    # Define a search filter to use when retrieving objects
    filters = [{
        'Name': 'tag:Name',
        'Values': [inputs["instanceName"]]
    }]
    # Fetch all the EC2 instances filtered with the above filter spec
    instances = ec2.instances.filter(Filters=filters)
    # Loop through returned EC2 instances printing the instance IDs
    for instance in instances:
        print('Instance: ' + instance.id)
    return list(instances)

test = {'instanceName': 'Test'}
result = findEC2byName(test)
print(result[0].root_device_name)
By executing my function with my test lines I can see everything is working as expected.

Before I get anywhere near trying to get my list back into vRO, I’m going to test whether it passes a serialization test by asking Python to convert it into a JSON string using “json.dumps()”, which is the same method that vRO will use. If something fails here then I know it’s definitely not going to work in vRO.
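The check itself is only a couple of lines; here is a minimal sketch, run against the “result” list from the test above:

# Attempt to serialize the returned list the same way vRO would
try:
    json.dumps(result)
    print('Serializable')
except TypeError as err:
    print('Serialization failed: ' + str(err))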
You can see from the screenshot below that my list of EC2 instances cannot be converted. The same result happens if I concentrate on a single EC2 instance rather than the list by using “json.dumps(result[0])”. The structure of an EC2 instance cannot be converted into JSON.

So what I have proved here is that if I want to get information from my Python function into another vRO action, workflow, script element etc., then I need to place that information into a different format that can be serialized. To do this I’m going to build a Python dictionary object for every EC2 instance I match and then add each dictionary to a list object. It is the list object that I will then return from my function. I’ve also added a few extra bits of info to my dictionary, such as IP address and root device name.
import boto3
import os
import json

# Set all the AWS variables from config file as OS environment variables
os.environ['AWS_CONFIG_FILE'] = os.getcwd() + '/awsconfig'

def findEC2byName(inputs):
    # Select the EC2 service to use
    ec2 = boto3.resource('ec2')
    # Define a search filter to use when retrieving objects
    filters = [{
        'Name': 'tag:Name',
        'Values': [inputs["instanceName"]]
    }]
    # Fetch all the EC2 instances filtered with the above filter spec
    instances = ec2.instances.filter(Filters=filters)
    # Create an empty list for storing new return objects
    returnList = []
    # Loop through returned EC2 instances, building a serializable dictionary for each
    for instance in instances:
        print('Instance: ' + instance.id)
        ec2Dict = {
            "instanceId": instance.id,
            "rootDeviceName": instance.root_device_name,
            "privateIpAddress": instance.private_ip_address
        }
        returnList.append(ec2Dict)
    return returnList
Now when I execute my code, with a few lines added to call the function, print its result and pass that result through “json.dumps()”, there are no errors returned.
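Those few extra lines (again, local-only and removed before packaging) look like this:

# Local-only test lines - call the function and verify the result serializes
test = {'instanceName': 'Test'}
result = findEC2byName(test)
print(result)
print(json.dumps(result))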

Now I can package my Python files up into a new ZIP file and use it to update my vRO action. Once updated, I need to set the output definition of the action. The JavaScript equivalent of a Python dictionary is a “Properties” object, and I am returning a Python list containing one or more dictionaries, which is an array. So my return type is an array of Properties objects.

Let’s Go For a Test Drive
To quickly test the action functionality I can drop it into a vRO workflow and hook up some inputs and attributes. I’m also using a script element to enumerate the content of the array so that I can validate its contents have made it across into vRO successfully; a sketch of that element is shown below.
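As a rough illustration only: if the script element also uses the Python runtime, and the action result is bound to an input named “ec2Instances” (both the binding name and the runtime choice are my assumptions here, not taken from the original workflow), the enumeration logic might look something like this:

def handler(context, inputs):
    # "ec2Instances" is assumed to be bound to the array returned by the action
    for ec2 in inputs["ec2Instances"]:
        # Each entry arrives as a dictionary built by the action
        print('Instance: ' + ec2["instanceId"]
              + ', root device: ' + ec2["rootDeviceName"]
              + ', IP: ' + ec2["privateIpAddress"])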

If I look at the content of the array in the attribute of the workflow run I can see all the attributes I added to the dictionary in Python together with the related values of my EC2 instance.

Conclusion
Hopefully this gives you an understanding of what you can and cannot do using vRO and the Polyglot functionality, in this case with Python. It’s not the answer to everything, but it certainly enables you to bring existing non-JavaScript content into vRO for specific use cases.