Process Intelligence Glossary
Since Process Intelligence is a new area for many users, the terminology can be confusing. It doesn’t help that different vendors use different names for the same concept. In this glossary we try to clear things up, on our side at least, by defining some of the key terms you will encounter in Process Intelligence in general, and Agilon in particular.
This page is intended as a reference for specific terms. For a first introduction to the concept, we recommend the “What is Process Intelligence” blog post.
Benchmarking

To see the process deviations in any specific subset of data (a specific office, product line, month, etc.), it is helpful to compare the process with that of a reference subset (such as another region). Instead of focusing on the absolute volume of cases and events, a benchmarking analysis presents relative differences, such as a certain event occurring 10% more often in one region than in another. This also enables comparison between actual process execution and best-practice or designed process flows.
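As a sketch of the idea, the snippet below compares the occurrence rate of one event between two subsets and reports the relative difference rather than the absolute counts. The region names and figures are invented for illustration; this is not Agilon's benchmarking implementation.

```python
# Benchmarking sketch: compare the relative occurrence rate of an event
# between two subsets instead of absolute counts. All figures invented.
region_a = {"cases": 2000, "Delivery delayed": 400}   # 400/2000 = 20%
region_b = {"cases": 500,  "Delivery delayed": 50}    # 50/500  = 10%

rate_a = region_a["Delivery delayed"] / region_a["cases"]
rate_b = region_b["Delivery delayed"] / region_b["cases"]

# The benchmark reports the relative difference, not the raw volumes.
print(f"{rate_a - rate_b:+.0%}")  # +10%
```

Although region A has eight times as many delayed deliveries in absolute terms, the benchmark surfaces the comparable figure: its delay *rate* is 10 percentage points higher.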
Case

The fundamental unit that the process is performed on, such as an order, a ticket, or a document. Everything that happens in the process flow consists of events and actions taken on the various cases. A process chart can be based on anything from a single case to an aggregation of millions of cases.
Case Attribute

A known value for each specific case, such as the branch the case belongs to, the type of service performed, or the date the case was registered. These attributes can be used to filter the analysis or to generate charts for a specific subset. They are also important inputs to the root cause analysis.
Event

Events are the fundamental building blocks of the process flow. An event is usually an action or activity that has occurred in the process, such as the initial creation of the case, the printing of a document, or the sending of a shipment, but it can also refer to a target, such as the requested date of delivery. These targets are sometimes referred to as “virtual” or “milestone” events, in that they are not actual activities performed by any user or system.
Event Attribute

Similar to Case Attributes, Event Attributes are additional information about each specific event occurrence. Common examples are the user who performed or registered the event and the application used. These attributes can be used for further subsetting or filtering of the process, or to visually distinguish events performed by, for example, different departments, as in a swim lane chart.
Event Log

The Event Log is the primary dataset used in process mining to generate charts and analytics around the process performance. It has a very simple structure and only requires a case identifier, an event name or label, and a timestamp. Optionally, it can also contain event attributes (such as the user or system performing each event), which can then be used to further filter or analyze the process.
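The three required columns can be sketched with plain Python structures. The column names and the sample rows below are illustrative, not Agilon's actual schema; the optional "user" field stands in for an event attribute.

```python
from collections import defaultdict
from datetime import datetime

# A minimal event log: each row needs a case identifier, an event name,
# and a timestamp. "user" is an optional event attribute. Names and
# values are illustrative only.
event_log = [
    {"case_id": "O-1001", "event": "Order created",    "timestamp": datetime(2024, 3, 1, 9, 0),   "user": "alice"},
    {"case_id": "O-1001", "event": "Delivery planned", "timestamp": datetime(2024, 3, 1, 10, 30), "user": "bob"},
    {"case_id": "O-1002", "event": "Order created",    "timestamp": datetime(2024, 3, 1, 9, 15),  "user": "alice"},
]

# Group events per case, ordered by timestamp, to recover each case's trace —
# the basic step behind turning an event log into a process chart.
traces = defaultdict(list)
for row in sorted(event_log, key=lambda r: r["timestamp"]):
    traces[row["case_id"]].append(row["event"])

print(traces["O-1001"])  # ['Order created', 'Delivery planned']
```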
Event Occurrence

An event occurrence is one particular occasion on which a given event has happened. If, for example, we have 1,000 cases and the event “Delivery planned” happened for 20% of them, we would have 200 event occurrences for the Delivery planned event. Similarly, the event occurrence rate would be 20%.
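Counting occurrences and deriving the rate is straightforward; the toy traces below (smaller than the 1,000-case example, and invented for illustration) show the same calculation.

```python
# Count occurrences of one event across case traces and derive the
# occurrence rate (occurrences / total cases). Traces are invented.
traces = {
    "O-1": ["Order created", "Delivery planned"],
    "O-2": ["Order created"],
    "O-3": ["Order created", "Delivery planned"],
    "O-4": ["Order created"],
    "O-5": ["Order created", "Delivery planned"],
}

event = "Delivery planned"
occurrences = sum(trace.count(event) for trace in traces.values())
rate = occurrences / len(traces)

print(occurrences, f"{rate:.0%}")  # 3 60%
```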
Event Order

Since events occur at particular times and for specific cases, each pair of events will have occurred in a certain order. If “Delivery planned” happened before “Shipment label printed”, we say that the case has an event order of “Delivery planned before Shipment label printed”, denoted “Delivery planned < Shipment label printed”. This is not to be confused with a Flow: the event order holds even if other events occurred between the two (such as if the full sequence was Delivery planned -> Picking completed -> Shipment label printed).
A measure of how uncommon a certain value is in the root cause analysis. The idea is that events and attributes occurring in all or the vast majority of cases are not easily within our control, and therefore provide little analytical value when identifying improvement potential.
Flow

A sequence of events occurring in direct succession, i.e. with no other event occurring between them, denoted “First event -> Second event”. A Flow is not to be confused with an event order, which holds even if other events occurred between the two.
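The distinction between the two relations can be made concrete with two small checks over a case's trace. The trace reuses the example sequence from the Event Order entry; the helper names are ours, not Agilon terminology.

```python
# Distinguish event order ("A < B": A happened at some point before B)
# from a flow ("A -> B": B directly follows A, with nothing in between).
trace = ["Delivery planned", "Picking completed", "Shipment label printed"]

def in_order(trace, a, b):
    """True if a occurs before b anywhere in the trace (event order)."""
    return a in trace and b in trace and trace.index(a) < trace.index(b)

def is_flow(trace, a, b):
    """True if b directly follows a somewhere in the trace (flow)."""
    return any(x == a and y == b for x, y in zip(trace, trace[1:]))

print(in_order(trace, "Delivery planned", "Shipment label printed"))  # True
print(is_flow(trace, "Delivery planned", "Shipment label printed"))   # False
print(is_flow(trace, "Delivery planned", "Picking completed"))        # True
```

The first two lines show the crux: “Delivery planned < Shipment label printed” holds as an event order, but is not a flow, because “Picking completed” sits between them.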
Impact

Impact is a quantification of the consequence of a deviation. In its basic form it is expressed as a number of cases (a 10% deviation for 1,000 cases leads to an impact of 100 cases), but it can also be expressed in other measures, for example a monetary amount (such as the total value of delayed orders).
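The arithmetic mirrors the example in the definition; the average order value used for the monetary variant is a made-up illustration.

```python
# Impact in its basic form: deviation rate times number of cases.
# Figures mirror the example in the text (10% of 1,000 cases).
total_cases = 1000
deviation_rate = 0.10
impact_cases = total_cases * deviation_rate
print(int(impact_cases))  # 100

# Impact can also be monetary, e.g. affected cases times an average
# order value (the 250.0 here is an invented illustration).
avg_order_value = 250.0
print(impact_cases * avg_order_value)  # 25000.0
```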
Improvement Potential

A measure of the potentially achievable change in outcome if a deviation is corrected, such as the additional number of orders that would be delivered on time. This is the key output of the root cause analysis and the base measure used to identify the “low-hanging fruit” for any change initiative, i.e. the changes with the greatest ROI potential.
Output

The final product or result of process analytics, either in the form of a report or presentation of findings (in the case of an ad-hoc analysis) or a continually updated dashboard or view where business users can consume the findings and recommendations of the analyst and/or the system.
Process Mining

The underlying algorithms that turn the raw data tables (event logs and case attributes) into a format suitable for presentation in flowchart form. Depending on the solution used, these algorithms can also prepare data for analytical functions (such as the Agilon root cause analysis).
Root Cause Analysis
The prescriptive analytics feature in Agilon that automatically identifies deviations in the process execution or attribute value distribution for any scenario the user selects. This is achieved by comparing the process model of the selected subset to a reference subset (such as comparing delayed orders with timely orders) and highlighting the key differentiating factors between them.
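The comparison at the heart of the idea can be illustrated with a toy example: for each attribute value, compare its occurrence rate in the selected subset (delayed orders) with the reference subset (timely orders), and rank values by the size of the difference. This is not Agilon's actual algorithm, and all rates below are invented.

```python
# Toy root-cause-style comparison: occurrence rates of attribute values
# in the selected subset vs. a reference subset, ranked by difference.
# All factor names and rates are invented for illustration.
delayed = {"Branch=North": 0.60, "Priority=Low": 0.55, "Channel=Web": 0.50}
timely  = {"Branch=North": 0.20, "Priority=Low": 0.50, "Channel=Web": 0.48}

diffs = {k: delayed[k] - timely[k] for k in delayed}
ranked = sorted(diffs.items(), key=lambda kv: abs(kv[1]), reverse=True)

for factor, diff in ranked:
    print(f"{factor}: {diff:+.0%}")
# Branch=North: +40%
# Priority=Low: +5%
# Channel=Web: +2%
```

In this sketch, “Branch=North” stands out as the key differentiating factor: it is far more common among delayed orders than among timely ones, while the other attribute values occur at nearly the same rate in both subsets.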