Architecting High-Performance Power Automate Workflows
Power Automate is a phenomenal tool for streamlining business processes without writing heavy code. However, as your data scales, how you architect your flows becomes critical. A poorly designed workflow dealing with just a few thousand records can consume hours of runtime and easily exhaust your daily billable action limits.
In this post, we will walk through a real-world scenario handling over 5,000 records, pulling from three separate data sources. We will look at why the “standard” looping method fails at scale, and then break down an optimized architecture that reduces a 5.5-hour workflow into a 16-second powerhouse, dropping the action count from 93,000 down to just 31.
The Scenario: Calculating Employee Bonuses
We need to generate a comprehensive Excel report calculating total compensation packages. The data lives across three disparate sources:
- SharePoint List: 5,000+ rows of employee data (Employee Name, Department, Annual Salary, Hire Date).
- Excel File (OneDrive): A lookup table containing department-specific bonus structures (Bonus % and Stock count allocations).
- External API: A Finance API that returns the real-time market value of Microsoft stock.
The Business Logic:
- Employees with less than 10 years of service receive no bonus.
- Total Package = Annual Salary + (Annual Salary * Department Bonus %) + (Department Stock Allocation * Current Stock Price).
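Before optimizing anything, it helps to pin down the math. Here is a minimal sketch of the rules above in plain Python (assuming "no bonus" also means no stock allocation; field names are illustrative):

```python
def total_package(annual_salary, years_of_service, bonus_pct, stock_count, stock_price):
    """Total compensation per the business rules: under 10 years of
    service the employee receives salary only (no bonus, no stock)."""
    if years_of_service < 10:
        return annual_salary
    return annual_salary + annual_salary * bonus_pct + stock_count * stock_price

# Example: $100,000 salary, 12 years of service, 10% bonus, 50 shares at $400.
package = total_package(100_000, 12, 0.10, 50, 400)
```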
Pattern 1: The Anti-Pattern (Why Loops Kill Performance)
The most intuitive way to build this is also the most dangerous for performance:
- Get the SharePoint items.
- Get the Excel rows.
- Call the Stock API.
- Open an “Apply to each” loop for all 5,000 SharePoint items.
- Open a nested “Apply to each” loop to check the Excel rows to find the matching department rules.
- Calculate the math using variables.
- Use the “Add a row into a table” Excel action to write the data.

The Result: Fetching the data takes seconds, but writing it takes hours. Because the flow processes one row at a time and loops thousands of times, this pattern takes 5 hours and 35 minutes to run and consumes over 93,000 billable actions, which will obliterate daily tenant limits.
Pattern 2: The Optimized Architecture (Zero Loops)
To achieve extreme efficiency, we must adopt a “batch processing” mindset. By leveraging Data Operations and HTTP API calls, we can execute the exact same outcome in 16 seconds using only 31 actions.
Here is the step-by-step masterclass on how to build it.
Trick 1: Maximize “Get Items” with Pagination
When querying large SharePoint lists, default settings will truncate your data. On the Get items action, click Settings and turn on Pagination. Set the threshold to a safe limit (e.g., 100,000) and set your Top Count to 5000 to ensure the flow pulls the entire dataset efficiently.
Pro Tip: Always set Top Count in tandem with Pagination. Relying on pagination alone can trigger many small, unexpected requests behind the scenes, slowing down query time.
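Under the hood, pagination is essentially repeated Top Count-sized requests until the list is exhausted. A rough sketch, using an in-memory list as a stand-in for SharePoint:

```python
# Hypothetical in-memory "list" standing in for a 12,000-row SharePoint list.
data = list(range(12_000))

def fetch_page(top, skip):
    return data[skip:skip + top]

def get_all_items(page_size=5000):
    # What Pagination does behind the scenes: keep requesting Top Count-sized
    # pages until a short page signals the end of the list.
    items, skip = [], 0
    while True:
        page = fetch_page(top=page_size, skip=skip)
        items.extend(page)
        if len(page) < page_size:
            return items
        skip += page_size

all_items = get_all_items()
```

With a Top Count of 5,000, the 12,000-row list above costs three page requests rather than thousands of tiny ones.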
Trick 2: Parallel Data Retrieval
Since the SharePoint list, the Excel reference file, and the Finance API do not depend on each other, they shouldn’t wait on each other. Right-click the node beneath your trigger and select Add a parallel branch. Run all three fetch actions concurrently.
To merge the flow back into a single thread, add your next action (e.g., Create file) below the parallel branches. Go to its settings, select Configure run after, and check the boxes for all three parallel fetch actions. The flow will now patiently wait for all data to arrive before proceeding.
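Conceptually, the parallel branches behave like three independent tasks joined before the next step. In Python terms (the three fetch functions are stand-ins for the SharePoint, Excel, and Finance API connectors):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the three independent fetches; a real flow would call the
# SharePoint, Excel, and Finance API connectors here.
def get_sharepoint_items():
    return [{"Employee Name": "A. Jones", "Department": "Finance"}]

def get_excel_bonus_rows():
    return [{"Department": "Finance", "bonus": 0.10, "stock": 50}]

def get_stock_price():
    return 400.0

# Launch all three at once; collecting the results waits for every branch,
# mirroring "Configure run after" checking all three fetch actions.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(f) for f in (get_sharepoint_items,
                                        get_excel_bonus_rows,
                                        get_stock_price)]
    items, bonus_rows, price = (f.result() for f in futures)
```

Total wall-clock time becomes the slowest fetch, not the sum of all three.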
Trick 3: Dynamic File and Table Creation
Instead of hardcoding a file, generate it dynamically.
- Use Create file (OneDrive). For the file name, use an expression like concat('EmployeeData_', utcNow(), '.xlsx') so it creates a fresh file per run. (Pass a single space character as the file content to bypass the required field.)
- Use Create table and point it to the dynamic File ID generated above. Define your headers (Range $A1:$G1) and name the table employee data.
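The concat/utcNow expression is ordinary timestamp string-building. One caveat worth noting: the raw utcNow() output contains colons, which OneDrive may reject in file names, so a formatted stamp such as utcNow('yyyyMMddHHmmss') is safer. A Python sketch of the same idea:

```python
from datetime import datetime, timezone

# Equivalent of concat('EmployeeData_', utcNow('yyyyMMddHHmmss'), '.xlsx'):
# a colon-free UTC stamp gives a unique, OneDrive-safe name per run.
stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
file_name = f"EmployeeData_{stamp}.xlsx"
```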
Trick 4: Create a JSON Dictionary for O(1) Lookups

In Pattern 1, we used a nested loop to find department bonus rules. We are going to replace that loop with a direct dictionary lookup.
What is O(1)? In computer science, an O(1) lookup means that instead of looping through items one by one (which is O(N) time), your query points directly to the answer instantly by referencing a specific key—like looking up a word in a dictionary!
- Add a Select action. Point it to the output of your Excel bonus table.
- Switch the Map to text mode. Map the Department as the key, and create a nested object for the values: {"bonus": item()?['bonus'], "stock": item()?['stock']}.
- Pass that output into a Compose action and wrap it in an expression to flatten it into a single JSON object.
You now have a clean JSON payload mapping departments to their rules (e.g., {"Finance": {"bonus": 0.1, "stock": 50}}). You can now query a department’s bonus instantly without looping.
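The Select-plus-Compose pair is the flow equivalent of building a dictionary keyed by department. A sketch with illustrative rows (the Finance values mirror the example above):

```python
# Rows as they would come out of the Excel bonus table.
bonus_rows = [
    {"Department": "Finance", "bonus": 0.10, "stock": 50},
    {"Department": "Sales", "bonus": 0.05, "stock": 20},
]

# "Select" + "Compose": flatten the rows into one department-keyed object.
bonus_lookup = {
    row["Department"]: {"bonus": row["bonus"], "stock": row["stock"]}
    for row in bonus_rows
}

# O(1): no loop needed to find a department's rules.
finance = bonus_lookup["Finance"]
```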
Trick 5: Batch Transformation with the Select Action
The Select action is arguably the most powerful data operation in Power Automate: it transforms an entire array of data in a single in-memory pass.
Pass your SharePoint items array into the Select action. Now, map your seven columns using expressions to handle the business logic on the fly:
- Choice Columns: SharePoint choice columns are nested. Use item()?['Department']?['Value'].
- Simple Math: To get the monthly salary, use div(item()?['AnnualSalary'], 12).
- Date Math: To find the years of service, use the ticks() function to subtract the HireDate ticks from the utcNow() ticks, then divide the result by the tick-to-year conversion integer.
- Dictionary Lookups & Complex Math: To calculate the bonus, query the Compose object we created in Trick 4 directly using the current item’s department: mul(item()?['AnnualSalary'], float(outputs('Compose')?[item()?['Department']?['Value']]?['bonus'])).
Because Select processes the entire array in memory, it executes data transformations for 5,000 rows in milliseconds.
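In code terms, the whole Select step is one map over the array, and the ticks math divides a tick difference by the ticks in a year (1 tick = 100 nanoseconds, so 864,000,000,000 ticks per day). A sketch with one illustrative row and the dictionary from Trick 4:

```python
from datetime import datetime, timezone

TICKS_PER_DAY = 24 * 60 * 60 * 10_000_000   # 1 tick = 100 nanoseconds
TICKS_PER_YEAR = 365 * TICKS_PER_DAY

def to_ticks(dt):
    # .NET-style ticks since year 1, matching the ticks() expression.
    epoch = datetime(1, 1, 1, tzinfo=timezone.utc)
    return int((dt - epoch).total_seconds() * 10_000_000)

bonus_lookup = {"Finance": {"bonus": 0.10, "stock": 50}}   # from Trick 4
now = datetime(2024, 1, 1, tzinfo=timezone.utc)            # stands in for utcNow()

items = [
    {"Department": {"Value": "Finance"}, "AnnualSalary": 100_000,
     "HireDate": datetime(2010, 1, 1, tzinfo=timezone.utc)},
]

# The whole-array "Select": one pass over every row, no per-row actions.
rows = [
    {
        "Department": item["Department"]["Value"],    # unwrap the choice column
        "MonthlySalary": item["AnnualSalary"] / 12,   # div(...)
        "YearsOfService": (to_ticks(now) - to_ticks(item["HireDate"])) // TICKS_PER_YEAR,
        "Bonus": item["AnnualSalary"]
                 * bonus_lookup[item["Department"]["Value"]]["bonus"],
    }
    for item in items
]
```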
Trick 6: Batch Uploading via the Graph API
The final bottleneck is the standard “Add a row into a table” action, which forces you to write data line-by-line. We will bypass this entirely using the Microsoft Graph API.
- Add the Send an HTTP request action from the Office 365 Users connector (this avoids premium HTTP connector licensing).
- Set the Method to POST.
- Use the Excel Graph API endpoint: https://graph.microsoft.com/v1.0/me/drive/items/{file_id}/workbook/tables/employee data/rows (Note: you can use an expression like split() to isolate the File ID dynamically from your Create file action.)
- In the Body, pass a single JSON object with a values array containing the output of your main Select action: { "values": body('Select') }
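Assembled outside the designer, the request looks roughly like this (the file ID is a placeholder; in the flow, body('Select') supplies the values array):

```python
import json

# Placeholder; in the flow, split() on the Create file output isolates the ID.
file_id = "FILE_ID_FROM_CREATE_FILE"
table_name = "employee data"

url = (f"https://graph.microsoft.com/v1.0/me/drive/items/{file_id}"
       f"/workbook/tables/{table_name}/rows")

# One POST carries every row: "values" is an array of row arrays,
# one inner array per table row, in header-column order.
select_output = [
    ["A. Jones", "Finance", 100000, "2010-01-01", 8333.33, 14, 130000.0],
]
payload = json.dumps({"values": select_output})
```

One request replaces 5,000 individual "Add a row into a table" actions.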
Bonus: Pattern 3 - Departmental Batching
What if you need a separate worksheet for every department? Rather than looping 5,000 times, you only loop for the number of departments.
- Get a distinct list of departments.
- Open an “Apply to each” loop that runs once per department (here, just 7 times).
- Inside the loop, create a new worksheet, use a Filter array action to isolate only the employees in that specific department, and use the Graph API call to post that chunk of data to the newly created sheet.
This slight variation generates a beautifully segmented report in under 30 seconds.
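The distinct-departments loop reduces to grouping rows by key. A sketch with abbreviated rows:

```python
rows = [
    {"Department": "Finance", "Name": "A"},
    {"Department": "Sales", "Name": "B"},
    {"Department": "Finance", "Name": "C"},
]

# Step 1: a distinct list of departments.
departments = sorted({r["Department"] for r in rows})

# Steps 2-3: one iteration per department, not per row; each chunk
# becomes one worksheet plus one batch Graph API POST.
chunks = {
    dept: [r for r in rows if r["Department"] == dept]  # the "Filter array" step
    for dept in departments
}
```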
Summary
When building enterprise-grade workflows in Power Automate, avoid loops wherever possible for data transformations and bulk writes. By combining Parallel Branches, the Select action for in-memory data shaping, JSON dictionaries for lookups, and batch API endpoints for system writes, you can build workflows that are orders of magnitude faster, dramatically more reliable, and far less likely to hit API throttling limits.