Power Automate: what I wish I had known

Microsoft Power Automate is one of those products that is sometimes misunderstood.

For some it represents an extension of SharePoint that provides easier access to data stored in a list. 

To others it represents the ideal tool for extending Dynamics 365 functionality.

In reality, Power Automate is a true gem of the Power Platform, enabling you to automate workflows quickly and integrate them seamlessly with your applications.

Much of its success depends on its wide range of available integrations, connectors and templates.

Although you can find a lot of training and support material on the web, in this article I want to share with you everything I wish I had known from the start.

In the following examples I will consider both Standard and Premium connectors, omitting licensing considerations.

Therefore, let’s fasten our seat belts and get going.

One of the premium connectors I love most is the HTTP connector.

The HTTP connector uses the REST (Representational State Transfer) architecture, which allows you to interact directly with data via web requests.

There are three kinds of HTTP connectors.

Let’s look at the case where we receive HTTP requests, because it is the most interesting and the most sensitive from a security point of view.

Imagine that you are implementing a Power Platform solution to improve the management of approvals for activities such as vacation plans, supply purchase requests, or work status updates.

Assume further that the approval request starts from a management system outside the context of Power Platform and Dynamics 365.

Finally, assume that for licensing reasons you have decided to adopt SharePoint lists to manage the progress of requests.

What you will have to do, once the request is submitted, is update the list based on the status of the request. 

Since we are not assuming the use of Dataverse in this case (otherwise you could have leveraged the Web API and perhaps handled some validation tasks in a Post-Operation step), we need to find an alternative solution.

One good solution is to use the HTTP request trigger, “When an HTTP request is received”, to start the flow.

This trigger allows you to define the HTTP verb of the call and add a JSON schema to validate the request body; the endpoint URL is generated automatically after you save the flow.
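
For example, here is a minimal sketch of the kind of JSON schema you might paste into the trigger’s “Request Body JSON Schema” field; the property names (requestId, requestType, status) are purely hypothetical for this scenario and would match whatever the external system sends:

```json
{
  "type": "object",
  "properties": {
    "requestId": {
      "type": "string",
      "description": "Identifier of the approval request (hypothetical field)"
    },
    "requestType": {
      "type": "string",
      "description": "Vacation plan, supply purchase, work status, etc."
    },
    "status": {
      "type": "string",
      "description": "Status to write back to the SharePoint list"
    }
  },
  "required": [ "requestId", "status" ]
}
```

Once the schema is in place, the incoming properties become available as dynamic content in the following steps.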

All good, but be careful!

The address is exposed on the Internet and can therefore be discovered by anyone, including malicious actors.

You will therefore have to introduce a level of security by implementing custom logic within the flow to validate the incoming parameters.

One implementation option is to include a security key that you will share exclusively with the developer.

The easiest and quickest way to use this connector securely is to add a key in the header.

If the condition is not met, you can handle the response by indicating that the action is not allowed.
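
As a rough sketch, simplified from what the flow’s code view would show, the check compares a hypothetical PersonalSecretKey header with the expected value and returns 401 when it does not match:

```json
{
  "Check_secret_key": {
    "type": "If",
    "expression": {
      "equals": [
        "@triggerOutputs()?['headers']?['PersonalSecretKey']",
        "@variables('PersonalSecretKey')"
      ]
    },
    "actions": {},
    "else": {
      "actions": {
        "Unauthorized_response": {
          "type": "Response",
          "kind": "Http",
          "inputs": {
            "statusCode": 401,
            "body": { "message": "Action not allowed" }
          }
        }
      }
    }
  }
}
```

In a real flow, the empty "actions" branch would contain the SharePoint update, and the expected value should come from a secure store rather than a hard-coded variable.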

Clearly the complexity of the value you will use as the PersonalSecretKey is critical.

You could also use a simple string like “hakuna matata” but at your own risk.

To increase the security of your key, take advantage of Azure services such as Key Vault.

Okay, we can consider ourselves fairly well protected; however, there is one more aspect to consider when using the HTTP connector.

The following advice is not only applicable to the HTTP connector; it can also come in handy if you are using a Custom Connector.

Imagine the scenario where you have created a Custom Connector to consume the REST API developed by the Backend team. 

It may happen, for various reasons, that the services are offline or that new features have not yet been released.

As a result, this could slow down the development of the flow considerably, because the flow will fail at the step where the corresponding action is called.

Therefore, we need a way to work around this issue.

Again, there are several solutions that can be adopted.

However, the one I find most interesting is to set up a static result.

To do this you will just have to open the action menu.

Then click on the Static Result option.

Now you need to fill in the response body, which I’m sure you will have defined together with the backend team.
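
The exact fields depend on the action, but the static result usually boils down to a status and a sample payload, roughly like the sketch below; the body here is hypothetical and should mirror the contract agreed with the backend team:

```json
{
  "status": "Succeeded",
  "outputs": {
    "statusCode": 200,
    "headers": { "Content-Type": "application/json" },
    "body": {
      "requestId": "12345",
      "state": "Approved"
    }
  }
}
```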

Doing so will enable you to pursue your own development and testing.

Testing, testing, testing …

Testing your flows is critical, and debugging is not always easy, especially in a context such as Power Automate or Power Apps.

Nevertheless, we can do something using Compose.

We can think of Compose as the console.log() of JavaScript, i.e., a way to output messages that help us better understand what is going on.

In fact, using this action you can enter any expression as input and verify that the output meets the expected value after it is executed.
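
For instance, here is a sketch of a Compose action used as a debug message; "List_rows" is a hypothetical name of a previous action whose output we want to inspect:

```json
{
  "Compose_debug": {
    "type": "Compose",
    "inputs": "@concat('Rows retrieved: ', string(length(body('List_rows')?['value'])))"
  }
}
```

After a run, the output of the Compose shows up in the run history, so you can check intermediate values without cluttering the flow with variables.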

Right now you might be asking yourself: why not use a variable instead of Compose?

Well basically you need to know that Compose is much more flexible. 

It can receive any kind of value as input and apply your expression, returning a dynamic output.

Also, Compose can be called at any point in the flow, while variables must be initialized first, and initialization cannot take place inside a scope.

By this I do not mean that you should always prefer Compose to variables; everything has its own context and meaning.

Clearly, Compose was not created to solve this debugging need, but I personally find it convenient to use.

Using this Debug technique will allow you to work more smoothly even with medium to large flows. 

In fact, it may happen that even though a cloud flow was created for a specific purpose, as time and requests go by it becomes quite large, sometimes too large.

To save ourselves and prevent a cloud flow from getting really big, it makes sense to think about decomposing it into child flows.

To create a child flow, you simply create a new instant cloud flow and choose the manual trigger.

Once you have added all the steps to your child flow, you can move to the parent flow and add a new step where you call the flow you just created.
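
In simplified code-view terms, the call from the parent looks roughly like the sketch below; the workflow id and the requestId parameter are placeholders:

```json
{
  "Run_a_Child_Flow": {
    "type": "Workflow",
    "inputs": {
      "host": {
        "workflowReferenceName": "00000000-0000-0000-0000-000000000000"
      },
      "body": {
        "requestId": "@triggerBody()?['requestId']"
      }
    }
  }
}
```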

Adopting this child flow approach helps you build well-scoped solutions that are more maintainable than before, and it improves troubleshooting as well.

I’d like to leave you with some suggestions for developing better child flows:

  • always build them inside a solution
  • use proper names to identify the parent and the child
  • use a service account for connections
  • be aware of solution limitations
  • adopt a KISS (Keep It Simple, Stupid) approach

However, you have to consider some limitations of child flows:

  • Your Data Loss Prevention (DLP) policy must allow the use of the HTTP connector
  • You must create the parent and child flow in the same solution
  • Flows in solutions don’t support delegated authentication

In addition, I would like to illustrate a different way to achieve the same result, because child flows may not perform well when you have a huge parent flow with multiple parallel branches to manage.

In this case you can split your flow and create child flows using Dataverse events.

To make this possible, you have to create a Dataverse Custom API.

Custom APIs in Dataverse are a great way to build your own API messages.

In this case you don’t need to attach a plug-in to the Custom API, because you only need to raise the event. Then you will create your flow with the Dataverse trigger “When an action is performed”.

Let’s see how to make it possible.

First of all, you have to create your Custom API. You can use the Plugin Registration Tool or you can create one in the Power Apps maker portal; in this article I’ll show you the latter scenario.

Assuming you have already created a solution, click on New and look for Custom API.

After that you have to fill out the main form.

Without going into too much detail because this topic would deserve a dedicated post, I will point out a few basic fields.

The first one is the binding type. In this case I chose Entity, because we are going to raise this event based on a table record.

You can use the following binding types:

  • Global – When the operation does not apply to a specific table
  • Entity – When the operation accepts a single record of a specific table as a parameter
  • EntityCollection – When the operation applies changes to a collection of a specific table

In the “Bound Entity Logical Name” field I entered the logical name of the table I prepared for this demo.

The “Allowed Custom Processing Step Type” attribute allows us to control which custom processing steps can be registered on our message:

  • None – No custom processing steps are allowed
  • Async Only – Only asynchronous custom processing steps are allowed
  • Both Async and Sync – Both asynchronous and synchronous custom processing steps are allowed

Here you can read more details about the parameters.

Finally, you can observe that I did not add a Plugin Type value.
 
That’s because we don’t care about the logic of this API; we only want to raise an event and catch it in the flow.
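
To recap, here is a sketch of the values used on the Custom API form for this demo; the unique name and table name are placeholders, and the processing step type is just an example choice:

```json
{
  "Unique Name": "contoso_DemoEvent",
  "Binding Type": "Entity",
  "Bound Entity Logical Name": "contoso_demotable",
  "Is Function": false,
  "Allowed Custom Processing Step Type": "Async Only",
  "Plugin Type": null
}
```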

After clicking the “Save & Close” button, our Custom API is ready to use. You can add request parameters and response parameters as well.

Let’s create our parent flow. For simplicity I used a manual trigger, but you can consider any other trigger, such as SharePoint, Project Online, etc.

In the parent flow I created a new record in the Demo Table and then, thanks to the “Perform a bound action” action, I raised the event by specifying the Action Name and the Row ID. In this case the action name is exactly the Custom API we created before.

Bound actions target a single table or a set of rows from a single table.
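
As a sketch, the designer fields of the “Perform a bound action” step would be filled in roughly like this; the table, action and row values are placeholders taken from this demo:

```json
{
  "Table name": "Demo Tables",
  "Action name": "contoso_DemoEvent",
  "Row ID": "@outputs('Add_a_new_row')?['body/contoso_demotableid']"
}
```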

Now let’s see the child flow.

For the child, the trigger is “When an action is performed”.

Therefore, when the parent raises the Custom API event, this flow will be triggered and will execute its own actions.

As mentioned earlier, this approach allows you to improve the performance of your flows and, more importantly, not keep resources tied up as is the case with the parent-child pattern seen earlier.

In any case, both solutions allow you to circumscribe functionality and facilitate debugging, keeping the focus on maintainability.

This means you must give future developers who need to work on the flow the right information, so that they can either intervene immediately or improve the flow.

How to do it?

You can start simply by naming individual actions properly.

Not only that, you can enrich your actions by inserting comments that allow immediate understanding of each step.

These little tricks will also allow other developers to handle malfunctions in a timely manner.

Because no matter how much care you may put into your flow, errors are always just around the corner.

So having actions in your flows that allow you to handle potential errors can greatly improve the user experience. 

Imagine you have a user who is using a Power Apps canvas app.

The application is designed to allow them to upload a document into a document repository.

In the various actions something may go wrong, and if the exception is not handled the user will be left stuck.

They cannot tell whether the document was uploaded correctly or whether something went wrong.

Regarding error handling in Power Automate, I recommend that you use the Scope action, which allows you to define an execution context.

This approach allows you to have a try-catch within your flow to handle and circumscribe the error.

For everything to work properly, you will have to configure a run after setting on the scope intended as the Catch.

The run after is nothing more than a setting that allows you to indicate when an action should be executed in relation to the outcome of the previous action.

To set it, just click on the 3 dots in the action and then click on Configure run after.

In this case, the run after was set by checking whether the previous action failed.

So it means that the Catch scope will only be executed if there was an error in the Try scope.

Keep in mind that if you use a scope, it only takes one failing action inside it to activate the Catch.
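
In the flow’s code view, the run after on the Catch scope boils down to something like this, where Try and Catch are simply the names given to the two scopes:

```json
{
  "Catch": {
    "type": "Scope",
    "runAfter": {
      "Try": [ "Failed", "TimedOut" ]
    }
  }
}
```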

As you can see, you can also add other conditions.

By default the run after is set to ‘is successful’.

Therefore, if you do not handle possible errors, the user experience may be compromised.

A further factor to take into account is the handling of flows stuck in a running state, as you may come across flows that, due to an unhandled error, run for more than 40 minutes.

Most of the time, when this phenomenon occurs in a loop such as ‘apply to each’, it means that the array received as input is empty.

However, if you are using HTTP calls, you can set a timeout and configure the retry policy, for example disabling retries when they are not necessary.

To set the timeout, you will have to use the ISO 8601 duration format.

For example, PT30S sets a timeout of 30 seconds, while PT1M sets a timeout of 1 minute.
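
As a sketch, here is how an HTTP action with a 30-second timeout and retries disabled would look in code view; the URI is a placeholder:

```json
{
  "HTTP_call": {
    "type": "Http",
    "inputs": {
      "method": "GET",
      "uri": "https://api.example.com/requests",
      "retryPolicy": { "type": "none" }
    },
    "limit": { "timeout": "PT30S" }
  }
}
```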

Although up to this point we have seen techniques for handling particular situations, we cannot overlook the issue of performance.

In particular, most of the time loops such as “apply to each” can prove to be a bottleneck.

If you want to increase the performance of your flow, you can enable parallel loop execution.

 

Parallelisation of loops can help improve performance and user experience.

However, this is not a silver bullet that solves all problems.

Before enabling it, do not forget that you will have multiple instances running at the same time, so you must be careful if the order of execution must be respected.

For example, if you retrieve an ordered list of people you wish to put in a text file and you have enabled loop parallelisation, you cannot be sure that you will write that file in a specific order.

Furthermore, another aspect you must take into account is the destination data source.

Assuming you have to write a record to a database, you can rest assured that current tools are generally able to handle simultaneous tasks without problems. 

However, this may not be true with other data sources that cannot handle too many simultaneous requests.

In this case you will have to limit the number of parallel instances that can be executed.
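
In code view, the degree of parallelism of an "apply to each" is controlled by its runtime configuration; in this sketch it is capped at 10 parallel iterations (an arbitrary value), and "List_rows" is a hypothetical previous action:

```json
{
  "Apply_to_each": {
    "type": "Foreach",
    "foreach": "@body('List_rows')?['value']",
    "runtimeConfiguration": {
      "concurrency": { "repetitions": 10 }
    },
    "actions": {}
  }
}
```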

Here you can find other limits related to parallelisation.

Besides performance, a very useful thing for your flow is to avoid unnecessary executions.

For example, if you use “when an item is created or modified” as a trigger, the flow will be triggered on every update.

But you probably don’t want Power Automate to do something with every update.

This is the case where trigger conditions can be used.

As you can guess, the actions that make up the flow will only be executed when the trigger condition is met, as in the example below.
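
A trigger condition is just an expression that must evaluate to true for the flow to run; for example, assuming a SharePoint trigger and a hypothetical Status choice column, it could look like this:

```json
{
  "conditions": [
    {
      "expression": "@equals(triggerBody()?['Status']?['Value'], 'Approved')"
    }
  ]
}
```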

Therefore, keeping the execution of the flow confined to particular conditions will allow you to avoid unnecessary calls.

Always keep in mind that every action behind the scenes is an API call.

Another goal should be to limit the number of actions you introduce, because of the platform’s per-user request limits.

In this regard, I would like to point out a way to save unnecessary steps.

Sometimes it may happen that actions are introduced to filter lists of data retrieved in previous steps.

But this process can be optimised. 

With certain connectors you can add the filter directly in the action.

One of these is the Dataverse connector.

By expanding the advanced options, you will be able to filter your requests.

This will allow you to improve the efficiency of the flow.

To do so, simply add your OData condition to the Filter rows field.
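
For example, here is a hypothetical filter for the Dataverse “List rows” action that returns only active rows of a given type; the column names are placeholders:

```json
{
  "Filter rows": "statecode eq 0 and contoso_requesttype eq 'Vacation'"
}
```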

However, you may have a need for more advanced filtering.

If you find yourself in this situation, you can use the Fetch Xml Query option.

In this field you can insert your FetchXML query and obtain more precise data.
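
As a sketch, here is a compact FetchXML query pasted into the Fetch Xml Query field; the table and column names are placeholders for this demo:

```json
{
  "Fetch Xml Query": "<fetch top='10'><entity name='contoso_demotable'><attribute name='contoso_name' /><filter><condition attribute='statecode' operator='eq' value='0' /></filter><order attribute='createdon' descending='true' /></entity></fetch>"
}
```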

If you have never had the opportunity to write FetchXML queries and do not feel confident doing so, don’t worry: there is a tool that will make your life easier.

This tool is available within XrmToolBox. 

XrmToolBox is a Windows application that connects to Microsoft Dataverse and provides you with tools to simplify customisation, configuration and operational tasks. 

The tool in question is FetchXML Builder, with which you can build your queries, test them, and then move them into your flow.