This is the second of a three-part blog series on Intelligent Process Automation (IPA), written by our CEO Dr Steve Sheppard.

Part 1 focussed on setting the right expectation for automation across the organisation, finding good opportunities and then prioritising. Part 2 covers intelligent design, implementation, delivery, optimisation and maintenance. Part 3 covers the use of artificial intelligence with process automation.

As a reminder, Intelligent Process Automation brings together the fields of Robotic Process Automation (RPA) and Artificial Intelligence (AI), encouraging the use of AI technologies such as natural language processing and machine learning within process automation. ‘Hyper-automation’ (Gartner) and ‘Digital Process Automation’ (Forrester) are other similar terms used to describe the combination of RPA with other technologies, such as AI, to extend the scope and scale of automation that can be achieved.

Many organisations have endeavoured to use RPA within their business to automate processes, but with mixed levels of success. A significant number have not achieved the scale of benefit expected, or that the RPA industry promotes as achievable. This blog series highlights some of the key challenges that occur and provides a few pointers on how to overcome them.

Note, RPA isn’t the only technology for process automation; it can also be achieved through BPM platforms, low-code platforms, traditional middleware and other technology solutions. Although there is a focus on intelligent RPA within these blogs, many of the principles covered also apply to these other technologies.

If you have any questions, feedback or want to understand more about how Combined Intelligence can help you on your intelligent process automation journey then please get in touch via

Process Automation – Intelligently

In blog 1, I discussed setting a realistic expectation for automation, how to identify good automation candidates (opportunities) and how to prioritise them. In this blog I focus on the remaining stages of process automation, from technology selection, through design, to operation and beyond. I’ve tried to highlight various key challenges and issues to consider, but within the constraints of a not-too-lengthy blog I can’t cover everything. If you want to know more, provide feedback or just generally discuss the topic, feel free to get in touch (I can be reached via the email above, or connect with me via LinkedIn).

Technology Selection

Selecting the optimal automation platform will be key to achieving both rapid and long-term success. It may also be closely associated with the selection of a delivery partner who (hopefully) has the necessary skills to help get you going or support you throughout your programme. There are different technology choices, e.g. low code vs RPA, as well as platform options (for RPA: UiPath, AA, Blue Prism, Microsoft or an increasing number of others) and platform location options, e.g. vendor cloud, private cloud, on-premise etc. Although the big three RPA platforms, for example, have a lot of capabilities in common, they do have strengths and weaknesses. They also have different pricing models, which will be a big influencer for many organisations.

Many of the vendors are also building out their platform capabilities, for example, adding AI functionality. The use of these extensions, or the integration with external capabilities (in the case of AI, from the likes of Microsoft, Google, IBM, Amazon etc.), will also need to be decided at some point (although not necessarily on day 1).

In summary, avoid selecting technology based on the supplier with the biggest marketing budget / shiniest demos (they’re investing a lot in this at the moment) and put some due diligence into this decision (consider getting some external guidance, for example, from Combined Intelligence!).


Intelligent Design

Personally, I’m a big fan of design thinking. Rather than diving into implementing a process automation based on an as-is process diagram, do some design thinking first. There is a tendency for some to think that an agile approach to implementation (see below) means that you don’t do upfront design, because upfront design is often associated with traditional waterfall approaches. This is actually a misunderstanding of agile and a bit of an excuse for developers to plough straight into implementation. I’m not advocating the generation of large paper documents, and the time spent on design needs to be effective and proportional to the complexity of the process to be automated. Some of the reasons (there are more) to do good design include:

  • End-to-End Process – If this is a sub-process of an end-to-end process, take a look at that end-to-end process and consider whether changes elsewhere could affect this automation. For example, improving the quality of data coming into the sub-process could increase the effectiveness of automation (on the downside, if the risk of changes outside the sub-process is high, it could considerably impact the reliability and effectiveness of this automation). Consider also how the automation of this sub-process might fit with an automation of the end-to-end process. It may be better to consider reengineering the end-to-end process before automating a sub-process that may become redundant or need major refactoring soon.
  • To-Be Automated Process – An automated process doesn’t necessarily look the same as an as-is human process even when using UI automations. Consider how the bot can be efficient in how it interacts with systems (e.g. minimising unnecessary navigation which might be a logical flow for a person but not a bot). When batch processing, it may be more efficient to run multiple interactions system by system (even spread across bots) rather than taking one process execution across multiple systems at a time. Process execution order also doesn’t need to be identical to the as-is processes as long as the outcome remains as expected.
  • Reuse – There are often elements of a process that are the same or very similar to elements in other processes. Implementing these as reusable workflows will speed up future process implementations. It also enhances maintainability by reducing the number of implementations that need to change when the dependent system changes.
  • Exception Handling – If an automation isn’t going to be 100% successful 100% of the time (fairly typical) then it means there will be exceptions to handle. This will often involve handing over to a person. Consider how this is going to happen and how that person (in operations not development!) will be notified of the exception and then provided the necessary information plus an effective mechanism to process the exception (possibly even bot assisted).
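To make the exception-handling point concrete, here is a minimal Python sketch of routing a failed work item to a human queue with enough context attached for an operations person to resolve it. The names and fields (`ExceptionItem`, `process_invoice`, the `supplier_id` check) are hypothetical illustrations, not taken from any specific RPA platform:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ExceptionItem:
    """A failed work item handed over to the operations team."""
    case_id: str
    stage: str      # where in the process the failure occurred
    reason: str     # human-readable explanation
    payload: dict   # the data needed to process the case manually
    raised_at: datetime = field(default_factory=datetime.utcnow)

def process_invoice(case_id: str, data: dict, human_queue: list) -> bool:
    """Attempt automated processing; on failure, append an ExceptionItem
    to the human queue rather than silently dropping the case."""
    try:
        if "supplier_id" not in data:
            raise ValueError("missing supplier_id")
        # ... system interactions would happen here ...
        return True
    except ValueError as exc:
        human_queue.append(ExceptionItem(
            case_id=case_id,
            stage="validation",
            reason=str(exc),
            payload=data,
        ))
        return False
```

The key design choice is that the handover carries the stage, reason and payload together, so the operations person is notified with everything needed to act, rather than having to re-investigate from scratch.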

There are also various other factors to consider such as security, data protection, performance, scalability, robustness to external system availability/reliability, standardised logging (for troubleshooting, auditing, security, insight…), testing etc. It’s good to use a design checklist/template that includes all these factors so that they are considered for each process design (get in touch if you want some assistance generating one). Over time you will create standard/reusable approaches/designs/implementations for many of these factors, which will speed up the design process considerably.
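One lightweight way to make such a checklist actionable is to keep it as data and check each design record against it. A minimal sketch, assuming a simple dict-based design record; the factor names are an illustrative subset, not a complete checklist:

```python
# Illustrative subset of design factors; a real checklist would be longer
# and tailored to the organisation's standards.
DESIGN_CHECKLIST = [
    "security",
    "data protection",
    "performance",
    "scalability",
    "exception handling",
    "logging",
    "testing",
]

def outstanding_factors(design: dict) -> list:
    """Return the checklist factors a design record hasn't addressed yet."""
    addressed = set(design.get("factors_addressed", []))
    return [f for f in DESIGN_CHECKLIST if f not in addressed]
```

Run against each design before implementation starts, this gives an objective "nothing forgotten" gate without forcing heavyweight documentation.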

Implementation and Deployment

Most teams will follow an agile methodology, often using Scrum or Kanban. That’s fine if agile principles are followed properly which, unfortunately, is often not the case, but that’s a subject for a different blog! One aspect I will pick up on here is delivering incremental functionality. Each sprint should lead to the completion of a set of working (tested!) functionality if sprints are being done properly. Additionally, if you work collaboratively with the operations team, you should be showing them (and getting feedback on/approval of) your automations during implementation, enabling efficient adjustments during, rather than after, the work has (in theory) been completed. If they are really on board, there will often be value in taking these incremental tested implementations live in a controlled fashion (without having to wait for the full process to be automated). This can deliver partial value early and identify issues in increments rather than in a big bang at the end, particularly for big, complex process automations.

RPA development isn’t the same as software development (although it can include some software development), but many of the principles and objectives of DevOps hold. Version control, continuous integration/deployment, automated testing etc. are all important aspects of an efficient, quality RPA development lifecycle. Automated testing (potentially including test-driven development) is an interesting area as it provides the ability to reuse the automation platform technology to automate these tests. Just consider how you ensure a level of independence between who/what is doing the testing and who has done the implementation (relying on developers to fully test their own implementations has never been good practice).
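One practical route to testable automations is pulling business rules out of the UI workflow into plain functions that can be exercised without the target systems being available. A hedged sketch in Python (the invoice-total rule itself is a made-up example, not a real process rule):

```python
def invoice_total(lines):
    """Business rule extracted from the bot workflow: gross total of
    invoice lines, rounded to 2 decimal places. Because it's a plain
    function, it can be unit-tested independently of the UI layer."""
    return round(sum(l["net"] * (1 + l["vat_rate"]) for l in lines), 2)

# A test that someone other than the implementer can own and extend,
# giving a degree of tester/developer independence.
def test_invoice_total():
    lines = [{"net": 100.0, "vat_rate": 0.2},
             {"net": 50.0, "vat_rate": 0.0}]
    assert invoice_total(lines) == 170.0
```

The UI automation then becomes a thin layer around rules like this, and the riskiest logic gets repeatable, fast tests rather than relying solely on end-to-end runs.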

Typically, you’ll run several environments supporting development, test and production. Your implementation should be broken into component parts so that you can control what you deploy across environments (not deploying everything each time). The automation platforms themselves will typically provide functionality to support deployments between environments but you may choose to add a layer of automation to support CI/CD/Automated Testing. Post deployment verification testing is important even if that can only be done by monitoring controlled initial live process executions.
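The promotion rules between those environments can themselves be encoded and enforced by a deployment script. A minimal sketch, assuming a simple dev → test → prod pipeline where a package only moves one environment forward and only after its automated tests have passed (the package fields are hypothetical):

```python
PROMOTION_ORDER = ["dev", "test", "prod"]

def can_promote(package: dict, target: str) -> bool:
    """A package may only move one environment forward at a time,
    and only once automated tests have passed where it currently sits."""
    current = package.get("environment")
    if current not in PROMOTION_ORDER or target not in PROMOTION_ORDER:
        return False
    one_step = PROMOTION_ORDER.index(target) == PROMOTION_ORDER.index(current) + 1
    return one_step and package.get("tests_passed", False)
```

A gate like this, sitting on top of the platform’s own deployment functionality, is one simple way to stop untested components skipping straight to production.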

There’s a lot more to implementation and deployment that I could comment on, but space in this blog won’t allow it. The last point I will touch on is the skills, experience and thought processes of the automation team. The automation platform vendors push the idea that almost anyone can implement automations. There is, though, a big difference between being able to implement a simple automation for personal use and implementing a complex end-to-end process automation that’s going to run automatically thousands of times a day. Create a team with the right thought processes (able to consider the bigger picture), invest in their skills and share experience across and outside the team.

Optimisation and Maintenance

Many process automations will initially go live without being able to fully automate the process 100% of the time. This will often be expected (i.e. in the specification/design), but there will also be cases when unexpected failures occur (hopefully handled gracefully by exception handling!). This could be due to scenarios (e.g. combinations of input data) that aren’t handled fully, unexpected behaviour of other systems, or known conditions that either can’t be interpreted by the current implementation (e.g. free-form text) or that the operations team requires a person to handle (e.g. sensitive outcomes, or decisions that can’t currently be encapsulated into rules). What this means is that there is often a need for optimisation post go-live, and this needs to be planned for (some of these scenarios can also be helped by AI, but that’s for blog 3!). There’s always a law of diminishing returns to factor in, but an investment in optimisation will often improve both the perception and the actual performance of the solution (both important). Performance optimisation will also become an increasing issue as the scale of automation within the platform grows; this may require future refactoring of implementations.
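Prioritising that optimisation effort is easier with numbers. A small sketch (the run-record fields are assumptions) that summarises post-go-live runs into an overall automation rate plus the most frequent exception reasons, so effort goes where the diminishing returns haven’t yet set in:

```python
from collections import Counter

def exception_summary(runs):
    """Summarise post-go-live runs: overall automation rate plus the
    most common exception reasons, ordered by frequency."""
    total = len(runs)
    failures = [r["reason"] for r in runs if not r["ok"]]
    rate = (total - len(failures)) / total if total else 0.0
    return rate, Counter(failures).most_common()
```

Feeding this from the standardised logging mentioned earlier gives a repeatable view of where the automation is falling short, rather than relying on anecdote.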

The automation platform will quickly become a business-critical system, with the people who previously performed the processes manually no longer available to take over if the automation doesn’t work. This means automations need to run reliably and continue to achieve the same level of performance (or better). Unfortunately, systems do change (particularly the UIs), the requirements on the process evolve (with business needs) and the load on the automation platform will continue to grow. This requires proactive maintenance and not just reactive support, including planning for changes due to external system upgrades as well as changing business needs.

As more and more automations go live the amount of time needed for optimisation, maintenance and support will continue to increase. This needs to be factored into the operating model and resourcing of the automation team.


That’s it for this blog, I hope it was an interesting follow on to blog 1. Feedback always welcome. Blog 3 covers the use of Artificial Intelligence in process automation which is a rapidly expanding opportunity to take automation further.

If you want to understand more about any of the aspects covered by this blog, or want to provide feedback, please contact me via the email above. Please also subscribe below to our newsletter so that you can receive our latest news, blogs and articles direct to your inbox. Or alternatively follow us on LinkedIn.

You might also want to read about the Automation Journey and the initiatives we’ve created to help organisations accelerate through this journey.