RPA vs IDP: what’s the difference and how can they work together?

When navigating the tech landscape in search of automation possibilities, you’ve probably come across a multitude of acronyms and different technologies. It’s easy to become overwhelmed, so to support you in this journey, we decided to write this blog post.

In this blog post, we would like to introduce you to RPA and IDP, explain how they differ from each other and, most importantly, show how they can enhance each other’s strengths and help you automate business processes end to end.

Manual processes are highly inefficient, error-prone, and expensive. Solving this problem might seem like a logical thing to do, but we want to highlight that choosing the right technology (or technologies) is important.  

What is RPA?

RPA or Robotic Process Automation consists of scripts that automate routine, predictable tasks. RPA bots can learn, mimic and then execute rule-based business processes. By showing the RPA robots what to do, you can let them do the work for you. Of course, they are not physical robots but software robots.

At Metamaze, we love to refer to RPA as the hands and feet of a human, performing the actions that are needed.

Examples of what RPA can do: copy/paste information, log into web applications, scrape data, connect to a system API, extract data from structured documents, follow “if this, then that” rules, … 
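To make that concrete, here is a minimal sketch of the kind of deterministic “if this, then that” rule an RPA bot executes; the routing names below are hypothetical placeholders, not part of any real RPA product.

```python
# A minimal sketch of a rule-based RPA decision: fixed conditions, no interpretation.
# The routing targets are hypothetical placeholders for real RPA actions.
def route_invoice(invoice: dict) -> str:
    if invoice["amount"] > 10_000:
        return "send_to_manager_approval"
    if invoice["currency"] != "EUR":
        return "send_to_treasury_queue"
    return "post_to_erp"

print(route_invoice({"amount": 12_500, "currency": "EUR"}))  # send_to_manager_approval
```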

You can imagine how valuable it is to implement solutions like RPA to automate (parts of) your business processes. Now, the most important thing to understand in RPA vs. IDP is that RPA has zero intelligence built into it. It can read data, but not understand or interpret unstructured data and therefore it’s only able to accomplish what you’ve programmed it to do.  

RPA is perfect for well-defined tasks like rekeying already-digital data. When you look at RPA to automate document processing, you’ll be fine when it comes to documents with fixed layouts: the robot can be trained to read the layout. But when it comes to complex documents where layouts can change, RPA alone will probably not be the best solution. It would take too many resources to keep updating all the templates. To handle those documents, you’ll need to combine it with an intelligent solution like IDP.
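To illustrate why fixed layouts matter, here is a simplified, hypothetical sketch of template-based extraction: a pattern tuned to one layout works, but silently fails as soon as the same data arrives in a different layout.

```python
import re

# Template tuned to layout A, e.g. "Invoice number: 2023-0042"
TEMPLATE_A = re.compile(r"Invoice number:\s*(\S+)")

layout_a = "Invoice number: 2023-0042\nTotal: 1,250.00 EUR"
layout_b = "Ref. no. 2023-0042 | Amount due 1,250.00 EUR"  # same data, different layout

print(TEMPLATE_A.search(layout_a).group(1))  # "2023-0042" -> the fixed layout works
print(TEMPLATE_A.search(layout_b))           # None -> the template breaks on layout B
```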

What is IDP?

IDP or Intelligent Document Processing is a tech solution that helps organizations classify and interpret data on documents. It’s intelligent because it uses artificial intelligence (AI) to do so. AI trains machines to mimic human intelligence so they can complete repetitive or complex tasks for us, or predict outcomes.

IDP uses different AI technologies to automatically process documents. Machine learning (ML) is the process of using patterns in data to ‘teach’ the machine, so its performance and predictions become more effective and accurate over time. Natural language processing (NLP) is the branch of AI focused on leveraging ML techniques so the machine can understand and interpret human language.

So, in the case of Intelligent Document Processing, machine learning and natural language processing are used to train a computer to simulate a human subject matter expert’s review of a set of documents.

The result? 
A computer capable of understanding the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.  

Because of this intelligence, organizations can automate the classification and data extraction of every possible document and email type, without the need for recurring or fixed layouts.
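As a rough illustration of the NLP building block (a generic, off-the-shelf model, not Metamaze’s own), a named-entity recognizer can already pull typical fields out of free text:

```python
# Illustrative only: a generic off-the-shelf NER model, not Metamaze's own models.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Invoice from Acme Corp, dated 3 March 2022, total amount of 1,250 euros.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Acme Corp" ORG, "3 March 2022" DATE, ...
```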

As with RPA, we like to use a metaphor to illustrate IDP. Here, we use the brain to illustrate the difference between RPA and IDP.

As you might realize by now, both technologies work towards a common goal: decreasing manual data processing.

The biggest challenge organizations with a lot of documents face is getting data from those documents into their systems. Combining RPA with an intelligent solution like IDP is therefore a highly effective strategy to ensure success.

RPA needs intelligence, and AI needs automation to scale.

Combining RPA and IDP

End-to-end automation by combining RPA with IDP

Combining RPA with artificial intelligence (AI) is often referred to as Intelligent Automation (IA). By combining both technologies, automation of any complex business task is possible. By adding AI technologies (like IDP) to RPA, the business scope and ROI can be further increased.  
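As a hedged sketch of what that hand-off can look like in practice (the IDP endpoint and field names below are hypothetical), the RPA bot moves the file, the IDP service turns it into structured data, and the bot then applies its rules to the result:

```python
# Sketch of an Intelligent Automation hand-off. The endpoint and fields are
# hypothetical; a real integration would use your IDP provider's actual API.
import requests

IDP_ENDPOINT = "https://idp.example.com/api/process"  # hypothetical URL

def process_incoming_document(path: str) -> dict:
    with open(path, "rb") as f:
        response = requests.post(IDP_ENDPOINT, files={"document": f}, timeout=60)
    response.raise_for_status()
    return response.json()  # e.g. {"type": "invoice", "amount": 1250.0, ...}

def route(result: dict) -> str:
    # Back to plain RPA territory: rule-based routing of the structured result.
    return "post_to_erp" if result.get("type") == "invoice" else "manual_review"
```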

Benefits of combining RPA with IDP

Reduce operational obstacles 
Combining IDP with RPA helps you to integrate legacy systems and overcome other functional barriers. 

Automate any business process, end-to-end 
Augmenting RPA with IDP expands the possibilities of business process automation to include nearly any scenario.  

Reduce operational costs 
AI-driven IDP + RPA improves straight-through processing (STP) by continuously learning from human feedback.

Other benefits of automation

Enhance customer experience 
Improve customer satisfaction by delivering faster response times, greater accuracy, and more consistent results.  

Liberate employees 
With Intelligent Automation in place, your employees can focus on objectives that use their unique human skills. This results in higher employee satisfaction and makes better use of your workforce’s strengths.

Use cases of IDP combined with RPA

Discover these use cases where we partnered with an RPA solution provider to ensure end-to-end process automation.

Automating invoice processing with IDP and RPA

Automating accounts payable with IDP and RPA

Let's talk about your business case

I am Jo Cijnsmans, strategic partnership manager and account executive at Metamaze. I’ve worked with a lot of organizations in finance, insurance, logistics, … to help them automate complex document workflows. In my role as partnership manager, I have strong relationships with RPA providers. Let’s discuss your business case in a short introduction call. 

We benchmarked ourselves against Microsoft, Google and Amazon

Over the past months, we’ve been working on some benchmarking experiments at Metamaze. We wanted to know more about the accuracy of our platform and how it measures up against big technology players like Microsoft, Google and Amazon.

We benchmarked invoice models from Metamaze, Google Document AI, Microsoft AI Builder and Amazon Textract.

The experiment

We used invoices for the benchmark experiment. An invoice model was trained on all our invoice data except for one held-out dataset. This enabled us to replicate how a pretrained invoice model works.

Each provider extracts different entities, which is why we report different metrics for the Metamaze models. When we compare with the Google invoice model, we only include the entities that both the Metamaze model and the Google model extract, and discard the others. The same goes for the other providers: we only take into account the entities we have in common with that provider. The list of entities per provider is included in the detailed report (see below).
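In pseudo-code terms, the comparison rule looks roughly like this (an illustrative sketch, not the exact benchmark code): per provider, accuracy is computed only over the entities that both models extract.

```python
# Illustrative sketch of the comparison rule: only score entities both models extract.
def accuracy_on_common_entities(preds_metamaze, preds_provider, ground_truth):
    common = set(preds_metamaze) & set(preds_provider) & set(ground_truth)
    acc_mm = sum(preds_metamaze[e] == ground_truth[e] for e in common) / len(common)
    acc_pr = sum(preds_provider[e] == ground_truth[e] for e in common) / len(common)
    return acc_mm, acc_pr

truth    = {"invoice_number": "2023-0042", "total": "1250.00", "iban": "BE01234567890123"}
provider = {"invoice_number": "2023-0042", "total": "1200.00"}   # does not extract IBAN
metamaze = {"invoice_number": "2023-0042", "total": "1250.00", "iban": "BE01234567890123"}

print(accuracy_on_common_entities(metamaze, provider, truth))    # scored on the 2 shared entities
```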

The results

As you can see, Metamaze’s accuracy is higher in every scenario.

Download detailed report

Want to know more details about the experiment? Download our benchmark report for free.

Keep improving your models with incremental learning

Retrain your models fast with incremental learning

In our previous blogpost, we talked about the importance of good data and annotations to train the model. Our data-centric AI approach is central to the development of our product. That’s why our Machine Learning team has been working hard on implementing incremental learning methods in Metamaze. In this blogpost, we’ll give you some insights on how that improves and accelerates the training of your solution. 

From traditional machine learning to incremental learning.

If you want to implement an Intelligent Document Processing solution, you need trained Artificial Intelligence models that are capable of automatically recognising the information you need from your documents. In order to train those models, annotated data is needed. We refer you to our previous blogpost to learn how we tackle data input to train the initial models.  

It is often assumed that a “good” training set in a domain is available a priori. The training set is so good that it contains all necessary knowledge, and once learned, it can be applied to any new examples in the domain. Unfortunately, many real-world applications do not match this ideal case. Trained models need to be updated every once in a while to learn new cases that differ significantly from the already learned ones. This can be achieved by re-training the models with the old and the new data. But as the training dataset grows, the training time increases, and at Metamaze we believe this is not sustainable. That’s where incremental learning comes in.

What is incremental learning?

Incremental learning is a method within the machine learning domain that allows models to continuously learn and extend an existing model’s knowledge by adjusting what has been learned previously based on new examples.
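As a generic illustration of the concept (using scikit-learn’s partial_fit, not Metamaze’s own training code): the model is first trained on the existing data and later updated with only the new examples.

```python
# Generic illustration of incremental learning with scikit-learn's partial_fit;
# not the Metamaze training code, just the underlying idea.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(1000, 20)), rng.integers(0, 2, 1000)  # already-learned data
X_new, y_new = rng.normal(size=(50, 20)), rng.integers(0, 2, 50)      # new corrections only

model = SGDClassifier(loss="log_loss")
model.partial_fit(X_old, y_old, classes=[0, 1])  # initial ("full") training
model.partial_fit(X_new, y_new)                  # incremental update: new data only
```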

Why we integrated incremental learning into Metamaze.

With our human-in-the-loop automation flow, users are asked to validate predictions when the model is not sure enough or misses an important piece of information. All manual corrections and confirmations on the production data are added to the training dataset to further improve the model. Once these corrections are learned by the model, the same types of errors cease to occur in the automated production flow.

In order to incorporate the new data in the AI models, the users need to trigger a training. They can choose between a Full or an Incremental training. This means they choose between training a model from scratch with all the available training data, both the data the model already knows and the new data, or updating an existing model with only the data that the model hasn’t learned yet.  

So basically, you can continue training a model from where you left off. This, of course, has some advantages.

Drastically shorter training time

Training a model from scratch can take several hours, or even days, depending on the size of the dataset. This means that users have to wait quite a long time before reaping the benefits of all their manual corrections. With an incremental training, the already trained model is simply updated with only those data samples it previously couldn’t process automatically. These trainings are much faster: the datasets are small and the model already knows a lot about the documents to be learned, so it simply needs to update its knowledge on some details.

Train new models faster by adding new entities

Another advantage is that you can add new entities. For example: if you have a model that recognizes name and invoice number, you can use that same model as a starting point to train a model that recognizes name, invoice number and invoice date, without having to start all over again.

This basically means that you can “recycle” models in other projects. Of course, the biggest advantage is that the annotation effort is much lower: names and invoice numbers no longer need to be annotated because the model can already recognize them; only invoice dates do.

This saves a lot of valuable human effort.
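Conceptually, “recycling” a model for an extra entity can look like this (a PyTorch sketch of the general idea, not Metamaze’s actual architecture): the weights already learned for name and invoice number are kept, and only the new invoice-date output is trained from scratch.

```python
# Conceptual sketch of recycling a trained model for one extra entity;
# not Metamaze's actual architecture, just the general idea.
import torch
import torch.nn as nn

HIDDEN = 768                      # hypothetical encoder output size
old_head = nn.Linear(HIDDEN, 2)   # trained head: name, invoice number
new_head = nn.Linear(HIDDEN, 3)   # new head: name, invoice number, invoice date

with torch.no_grad():
    new_head.weight[:2] = old_head.weight  # keep what was already learned
    new_head.bias[:2] = old_head.bias
# Continue training with only the documents annotated for the new entity (invoice date).
```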

Incremental training settings in Metamaze

CONTACT US

Book a demo today

Curious how Metamaze works and what it can mean for your enterprise? 

Seasonal spikes in claims cost insurers a lot of money

Every year, insurers must handle seasonal spikes of insurance claims, mostly during storms. Some Belgian insurers are still handling insurance claims months after storm Eunice hit Belgian territory. 🤯 Handling all these claims usually results in hiring an extra temporary workforce. But could it be different?

Seasonal spikes

There can be predictable seasonal spikes in the volumes of documents being sent. Of course, property damage claims during hurricane season are the first ones that come to mind. But even in summer, several big-name insurers in the UK have reported subsidence incidents up to 20% higher than in typical summers, causing spikes in insurance claims. With summers getting hotter due to global warming, more and more houses with older foundations are affected by subsidence. A spike in cyber insurance claims has even been seen due to COVID-19, as the risk of cyber-attacks rose while more people were working from home and relying on (potentially less secure) digital tools.

But there are also other factors at play that aren’t quite so easy to see coming. Changes to the law might affect compliance regulations dictating document storage. But the volume can also be in flux even when there are no external factors. Sometimes it’s just a matter of certain claims requiring larger volumes of paperwork: one claim might only need a few single-page documents while another includes a file that’s nearly 100 pages. Depending on the time of year and the size of the business line, the number of incoming documents could be in the thousands or the hundreds of thousands.

Hiring extra people

Car insurance, property insurance, renters, health, life, flood and fire: all of these products are distinct from one another in terms of what they cover, each with their own set of rules and regulations as to how documents are processed.

Especially for the processing of incoming claims, insurers hire extra temporary employees. Manually sorting and distributing all this information, even if it’s done over electronic channels, can be exceptionally complicated and prone to errors. Those who do the actual sorting need to be trained, which on its own isn’t so bad but becomes even worse when you factor in seasonal spikes in the amount of work to be done. It costs the same to train a seasonal employee as it does a full-time one, an investment that bears little return when the seasonal worker is gone within two or three months.

Can it be different?

Of course: yes! 

Automating the interpretation and extraction of incoming documents could fix all of this. When a document enters the insurer’s organization, a human will read it, try to understand it and extract the information that is relevant for further processing. An IDP (Intelligent Document Processing) platform can do exactly the same.

Based on Artificial Intelligence, Machine Learning, Natural Language Processing and Robotic Vision, the models behind IDP are trained to process documents the same way humans do. Actually, they do it more accurately, but let’s not dwell on that here. 😏

It’s not an exaggeration to say that claims management is the most important aspect of any insurance company’s business. Why? There are cost and control considerations to be sure, but the biggest impact on the bottom line might be customer satisfaction. More than 60% of customer dissatisfaction originates in the back office. This means that automating time-intensive processes like document processing can have a huge impact on insurers.

Interesting cases

Europ Assistance is automating all incoming communications with Metamaze

Processing all these emails and letters requires a lot of time and effort. The data-entry team used to read every email and letter to decide the document type, interpret the context and forward it to the teams that need to process it further. Used to, because now they have Metamaze to automate this document processing. 

While the Belgian insurance world was upside down after storm Eunice, Keypoint had nothing to worry about

Metamaze was implemented just before storm Eunice hit Belgian territory. This storm wreaked havoc in Belgium and caused more than 500 million euros in damages. This, in turn, led to an extremely high number of submitted insurance claims, creating enormous amounts of documents that needed to be processed by Keypoint. Luckily, Metamaze was integrated right before all these documents flooded Keypoint’s systems.

What we recommend

A lot of insurance companies have handled document processing from previous storm claims manually. To prevent this from happening again next time, we suggest getting in contact with us so we can build you a proof of concept.

Leverage the pile of documents from this claims season and use it to train the models behind our Metamaze platform. This will quickly improve accuracy and make sure you can chill in the Bahamas when the next storm hits.

Download our free guide

Download this guide to know more about how Intelligent Document Processing can help you automate claims documents, resulting in faster and more accurate processes.

CONTACT US

Book a demo today

Curious how Metamaze works and what it can mean for your enterprise? 

Why we build Metamaze on data-centric AI

From the very start of Metamaze, we were convinced our platform should be built on data-centric AI. There’s a lot of buzz around the topic right now, so we decided to write this blog post about it.

What is data-centric AI?

When building any Machine Learning model – and especially in an Intelligent Document Processing platform – data and models go hand in hand. In the AI world, a lot of effort has gone into the second part: models. Model-centric AI is about optimizing and working hard on the algorithms behind models to make sure the model is as accurate as possible on a fixed dataset. But improving your model has diminishing returns: it becomes harder and harder to keep improving.

Our entire machine learning team knows that our models are already state-of-the-art and that focusing solely on improving the model architectures might only result in an increase of a few extra (tenths of a) percentage points in accuracy. The biggest gains don’t come from the models anymore: they come from improving the data.

Our CTO Jos Polfliet usually explains it something like this:

“You can think of these models as a recipe. Most recipes are good. If you follow them, you’ll get a delicious meal. But if you make the recipe with rotten ingredients, chances are you won’t enjoy it. The same goes for AI models. If you feed them bad data during training, the outcome might not be what you expected. On the other hand, cooking with delicious, carefully grown and selected high-quality ingredients can push a good dish to a savory masterpiece.”

Jos Polfliet - CTO Metamaze

From big data to good data.

When asked, “How much data do you need?”, ML engineers often wittily reply with the single word “More!”. At Metamaze, we believe the truth is a bit more nuanced. The amount of information any given document adds to the accuracy of the model is not always the same.

When you focus on data quality and variety, your chances of achieving higher accuracy rates are better.

So where do you get better data to improve the model accuracy?

1. Carefully selecting which documents to annotate

When you put in 1000 documents to train your model, chances are that only about 40% of them will actually add valuable information to the model. The other 60% are documents that the model already recognizes perfectly, so it learns nothing new from annotating them and adding them to the training set. Annotating these only costs valuable human effort and increases the training time unnecessarily. So clearly, more data is not necessarily better.

2. The quality of annotations.

In machine learning (a sub-domain of AI), data annotation is the process of labeling data to show the outcome you want your machine learning model to predict. You are marking – labeling, tagging, transcribing, or processing – a dataset with the features you want your machine learning system to learn to recognize. Once your model is deployed, you want it to recognize those features on its own and decide or take some action as a result. But the way this data is annotated is crucial: consistency is important. For some examples of common annotation mistakes, click here.
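To make “consistent annotations” concrete, here is an illustrative annotation record (not Metamaze’s internal schema): span-level labels that tell the model exactly which piece of text maps to which entity.

```python
# Illustrative annotation record (not Metamaze's internal schema).
annotation = {
    "document_id": "invoice_0042",
    "text": "Invoice 2023-0042, total amount due: 1,250.00 EUR",
    "entities": [
        {"label": "invoice_number", "start": 8, "end": 17, "value": "2023-0042"},
        {"label": "total_amount", "start": 37, "end": 45, "value": "1,250.00"},
    ],
}
```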

How we implemented data-centric AI in Metamaze

We have integrated some specific models and features in our platform to focus solely on data quality. Let’s introduce you to some.

Optimal Document Selection Strategy

Only those documents that will improve the quality of the solution the fastest are pushed as suggested annotation tasks. This reduces the annotation effort by more than 50% compared to other solutions, while obtaining the same level of quality.
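One common way to implement such a selection strategy is uncertainty sampling: the documents the current model is least confident about are the ones it can learn the most from. The sketch below is illustrative and not necessarily how Metamaze ranks documents.

```python
# Illustrative uncertainty sampling: surface the documents the current model is
# least sure about, since annotating those teaches it the most.
def least_confident(documents, predict_proba, budget=10):
    scored = []
    for doc in documents:
        confidences = predict_proba(doc)        # per-entity confidence scores
        scored.append((min(confidences), doc))  # one weak prediction = uncertain document
    scored.sort(key=lambda pair: pair[0])       # most uncertain first
    return [doc for _, doc in scored[:budget]]  # only these become annotation tasks
```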

Want to know more about the technical details? Read our previous blog post about this.

Suggested review tasks

Suggested Review Tasks are tasks that are automatically created after you have trained a model. These tasks contain documents with annotations that the Metamaze A.I. believes are wrong and need to be verified. This innovative auto-correct feature is an immense time-saver that makes your model perform more accurately with less human training time.
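The general idea can be sketched as follows (illustrative only): flag every annotation that a confident model strongly disagrees with, so a human can double-check it.

```python
# Illustrative sketch of suggested review tasks: flag annotations that a confident
# model disagrees with, so a human can verify whether the annotation was wrong.
def suspicious_annotations(annotations, predictions, threshold=0.95):
    flagged = []
    for ann, pred in zip(annotations, predictions):
        if pred["value"] != ann["value"] and pred["confidence"] >= threshold:
            flagged.append({"entity": ann["label"],
                            "annotated": ann["value"],
                            "predicted": pred["value"]})
    return flagged
```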

Want more technical details? Read our previous blog post.

Want to see these features in action?

Don’t hesitate to schedule a quick 30-minute demo and we’ll give you an introduction to our platform.

Metamaze awarded as Tech Start-up Of The Year

Being awarded Tech Start-up of the Year at the Datanews Awards 2022 was honestly a big surprise for us. “The jury believed in our added value and, thanks to a wonderful portfolio of clients, we were able to bring the award home,” says CEO Niels Van Weereld.

A few words from our CEO: 

“The past year and a half has been a roller coaster. Our team doubled in the last couple of months and we onboarded a lot of new big names to our client portfolio. This award is a recognition of the entire team’s hard work, but mostly of their passion, which is undoubtedly reflected in the results of their work. Our mission is crystal clear: we want to eliminate mind-numbing, repetitive data-entry work and give people back the qualitative jobs they deserve. This mission is rooted deep in the foundations of our company and within the hearts of every Metamaze team member.

The jury was also very enthusiastic about the way we developed our platform. To say it in their words: “Innovative technology that can make a huge impact on organizations’ processes, packed in a low-code platform that enables everyone to put the magic of Artificial Intelligence at their fingertips.” Of course, our platform has already been validated in customer projects, through the ROI and impact it has on their bottom line and on customer and employee satisfaction. But… we can’t deny that taking this award home is a validation of, first, our platform and, second, the way we run our business and the hard work everyone has put into it.

We will continue giving our all. But first, we’ll have a little celebration party with the team.”

Book a demo today

Curious how Metamaze works and what it can mean for your enterprise?

Challenges banks face in back-office automation

Banks moving from front-end to back-end transformation.

Banks have been under pressure for quite some time now. Digital transformation projects have become top priorities for finance enterprises. But these efforts have been hugely centred around improving customer experience from a front-end point of view. Huge investments have been made at the top of the tech stack: the front end. A lot of beautiful experiences have been launched in the form of portals, onboarding experiences, mobile apps, …

However, the back office is an integral part of the larger customer experience and should not be overlooked. 60% of customer dissatisfaction originates in the back office. Most customer contact-centre interactions are the result of execution issues in the back office. So what’s going on?

Banks and their manual processes

Behind the slick appearance of fancy mobile banking apps, banks’ operations are riddled with manual steps and inaccessible data. Banks’ current back offices are overly reliant on paper and manual processes.

Of course, current processes largely originate from long-standing, human-centric procedures. Compliance has resulted in a lot of manual reviews and processing.

But at Metamaze, we believe that processing documents should not require a lot of manual work. So why does it?  

Challenge 1: 80% of your data is trapped below the surface

Why do all the deposit slips, credit reports, competitive analyses, loan agreement forms, stock market reports, letters of credit, bank pre-advice, … need processing?  

One word: information. Or in modern lingo: data.  

Unless it’s blank, a document has data on it.  
(even then, one can argue no data is also data) 

Data is the new oil (yeah we know, you’ve heard this before).  

And interpreting data used to require a lot of work because of the format it comes in: structured and unstructured documents.  

Structured documents are easy peasy lemon squeezy. Their data is organized in a predictable, orderly pattern. Think of spreadsheets. Structured documents are usually composed of numbers or values that make it relatively easy for OCR (optical character recognition) to extract, interpret and classify information.

But… the hurdle comes when we’re talking about unstructured documents. These are highly unsystematized and far more unpredictable due to the variety of formats: email, chat, sensor data, IoT, video files, …

So extracting data from unstructured documents requires effort, and by necessity it has largely become a human job, albeit a tedious one.

In recent years, banks have been investing in technology to automate and streamline processes. But 87% of these initiatives fail.  

Eighty-seven per cent!! 🤯 

These technologies are only suited for structured documents, which represent only 20% of the information large banks handle.

So what happens with the other 80%? Bank statements, pay stubs, standard settlement instructions, … these all still require humans to manually review, sort and understand data that is largely inaccessible.  

Did you know that an average mortgage application goes through 35 manual handoffs before completion? 

Challenge 2: the variability of documents is endless

Current approaches (often including in-house solutions) all fail to move the needle for one simple reason: the variability of documents is nearly endless. Even worse: the variability creates more manual work.  

Why? Because a lot of automation solutions can’t handle the variability. If you code a template-based solution to extract data from a payment slip with lay-out A, it will not be able to extract data from the same document in lay-out B.  

Challenge 3: narrow use cases

When identifying innovation trajectories, banks are often focused on specific use cases. But it’s not enough to identify one clunky process and build/buy a tool for it. 

You will find accurate solutions, for sure. But with a very narrow use case. You don’t want to build a scattered IT landscape resulting in a patchwork of solutions.  

So considering these challenges, it’s somewhat understandable that manual processes are still commonplace in banks. However, automation is key to surviving in this hyper-competitive market and to increasing customer and employee satisfaction levels. And that’s where Intelligent Document Processing comes in.

Intelligent Document Processing: what? 🤔

Intelligent Document Processing, let’s call it IDP (time = money, right?), is a technology based on artificial intelligence and OCR that allows banks like yours to automate data extraction from complex, unstructured documents and convert them into usable data.

To keep things simple: an IDP solution uses different technologies in the process to extract, interpret and categorize relevant data.

Imagine you want to automate loan application processes

A big part of the workflow is checking and validating a huge number of documents. Before a loan can be accepted, banks need to establish a clear insight into the financials of a family or company. That means validating official documents (salary slips, contracts, application forms, …).

Banks’ talented loan experts have to perform tasks that are not particularly challenging. Does this sound like a nice job to you?  

Intelligent Document Processing helps you with that. You input the document into Metamaze, and it will classify it, extract the relevant data and interpret it for you.

This information is then output to your systems again.

For sure, the reality is a bit more complex than this.  

By now, you’re probably curious to see more.

In the next blogposts, we’re going to take a deeper dive into specific use cases for the finance industry. Sign up for our finance newsletter to get them directly in your mailbox. ⬇️

Discover how AXA automates their loan application process

Thanks to Intelligent Document Processing, AXA manages to process the same amount of loan application documents with less than half the time and effort.

Book a demo today

Curious how Metamaze works and what it can mean for your bank?