How MCPs turn prompts into agent-driven document pipelines

[00:01:00] 

Let's get started. So, hi everybody. Thanks for joining this session. I'm Mahashree, Product Marketing Specialist at Unstract and also your host for this webinar today. Now, in this session, we are going to be tapping into a space that sits right at the cutting edge and could potentially revolutionize the way business is done.

[00:01:20] 

So this is also where large language models are heading today, and it could be considered the next milestone. And what I'm talking about is model context protocols, or MCPs. So in this webinar we'll take a look at the advent of MCPs, their specific implications in document processing and

[00:01:40] 

automation, and how Unstract can be enabled in MCP environments.

I'd also like to note that this is a beginner-friendly webinar. So whether you are hearing about MCPs for the first time or you've already tried them out, we've tried to design this session to provide value to everybody. So here's the agenda for today.

00:10:15.000 –> 00:10:25.000

We'll start with MCP basics, that is, what are MCPs, how they work, and where they are headed.

00:10:25.000 –> 00:10:31.000

Following this, we’ll move on to discussing MCP implications in document processing and automation.

00:10:31.000 –> 00:10:34.000

How they fit in, where they come into use, and so on.

00:10:34.000 –> 00:10:42.000

And then we’ll dive into an in-depth demo of the Unstract MCP server, and how this actually works using a real-world use case.

00:10:42.000 –> 00:10:52.000

And finally, in case we have any questions remaining, we’ll also be venturing into a Q&A segment, where one of our experts will be on air to answer your questions live.

00:10:52.000 –> 00:11:02.000

So that said, before we get started, here are a few session essentials, or ground rules, I'd like to set before we continue into the session.

00:11:02.000 –> 00:11:06.000

So firstly, all attendees in this webinar will automatically be on mute.

00:11:06.000 –> 00:11:11.000

In case you have any questions, please do drop them in the Q&A tab that you’ll find in the bottom panel of your screen.

00:11:11.000 –> 00:11:16.000

One of us from the team will be able to get back to you with the answers via text.

00:11:16.000 –> 00:11:25.000

Now, you can also interact with fellow attendees using the chat tab. This is also where you let us know in case you run into any technical difficulties during this webinar.

00:11:25.000 –> 00:11:32.000

And as a final point, when you exit this session, you'll be redirected to a feedback form, where I request you to leave a review.

00:11:32.000 –> 00:11:38.000

So that said, let’s start with the big question. What exactly are MCPs?

00:11:38.000 –> 00:11:47.000

Now, to answer that, let’s rewind a little. So, large language models like ChatGPT, Claude, or Gemini have been around only for a few years.

00:11:47.000 –> 00:12:00.000

But they've already transformed how we get things done, with their core functions like question answering, reasoning, natural language understanding, summarization, and the list that you have here.

00:12:00.000 –> 00:12:08.000

They’ve made it possible to complete many tasks much faster and more efficiently using just a simple prompt.

00:12:08.000 –> 00:12:14.000

So whether you're a marketer generating a quick email draft, a student asking for

00:12:14.000 –> 00:12:22.000

maybe easy dinner recipes, or a developer seeking help with code, LLMs are becoming everyday assistants for almost everybody.

00:12:22.000 –> 00:12:28.000

But here's the catch. Despite their power, most of these interactions with LLMs

00:12:28.000 –> 00:12:33.000

still follow a very basic pattern: you prompt, the model responds.

00:12:33.000 –> 00:12:48.000

And then you take it from there. This means that the response the model gives you solves only a specific part of your task, which you'll eventually have to take and plug in somewhere else in order to complete the

00:12:48.000 –> 00:12:54.000

end-to-end task that you have in hand. So what if we could actually transcend this pattern and go into a mode where

00:12:54.000 –> 00:13:00.000

The model doesn't just give you a response, but also performs actions to complete the task faster.

00:13:00.000 –> 00:13:05.000

And that is what we are really trying to do with model context protocols.

00:13:05.000 –> 00:13:12.000

So, to understand this better, let me take the example of a marketer wanting to send out an email.

00:13:12.000 –> 00:13:17.000

So this is probably how the workflow would look. So they would first write the email.

00:13:17.000 –> 00:13:28.000

Choose a contact list to send that email to. And also schedule that email on a campaign tool like HubSpot, and finally inform the marketing team on the messaging channel.

00:13:28.000 –> 00:13:38.000

Now, so far, how we’ve been using LLMs when it comes to a task like this is that you’d probably enable LLM assistance for the drafting of the email part.

00:13:38.000 –> 00:13:53.000

So you’d probably enter some details about what email you’re drafting, what are the details you want to include, and structural instructions that you want, and you would get an email draft from the LLM as a response, over which you can start working before you send the email.

00:13:53.000 –> 00:13:57.000

Now, what if I tell you that instead of doing this.

00:13:57.000 –> 00:14:00.000

What if you are able to send a prompt that says.

00:14:00.000 –> 00:14:06.000

Send a promotional email about our August sale to the VIP customer list.

00:14:06.000 –> 00:14:11.000

Schedule it for Friday 10am, and notify the marketing team on Slack once it’s done.

00:14:11.000 –> 00:14:17.000

So when you send in this prompt, what we are looking to do is actually have the LLM

00:14:17.000 –> 00:14:28.000

Enable this entire workflow that I have over here, not just with drafting the email, but also choosing the contact list, scheduling the email on HubSpot, and also informing the team on Slack.

00:14:28.000 –> 00:14:34.000

So what if the LLM is actually able to perform this entire heavy lifting? Just imagine the amount of

00:14:34.000 –> 00:14:44.000

Time it frees in the hands of not just the marketer, but anybody using this tool. And this is what we are looking to achieve with MCPs.

00:14:44.000 –> 00:14:56.000

So, let’s actually understand what model context protocols are. So if I had to break it down for you, model refers to a large language model like GPT or Claude.

00:14:56.000 –> 00:15:05.000

To which we would be sending in the prompt. Context refers to all the information and context that the model needs to understand and act intelligently.

00:15:05.000 –> 00:15:13.000

So if we were to take the marketer’s case, the context would include the tools that it requires to complete the entire workflow, like HubSpot or Slack.

00:15:13.000 –> 00:15:19.000

And also information like the campaign name, email copy, scheduling time, and recipients.

00:15:19.000 –> 00:15:30.000

And finally, protocol refers to the open standard that governs how this context is passed to the model, and how the model interacts with tools and applications to get this done.

00:15:30.000 –> 00:15:41.000

So with MCPs, models don’t just respond, they take action. And because MCP is an open protocol, it can integrate with tools across your entire business stack.

00:15:41.000 –> 00:15:47.000

From email, to CRM, document processing, task management, and the list could go on.

00:15:47.000 –> 00:15:55.000

That is why MCPs are being seen by top-tier companies around the world today as the next major shift in enterprise AI.

00:15:55.000 –> 00:16:02.000

Because now you don't just get help from the model; you also get work done by it.

00:16:02.000 –> 00:16:08.000

So now that we’ve understood what MCPs are, let’s take a very brief look.

00:16:08.000 –> 00:16:18.000

As to how they work from a technical standpoint, because you will require this level of understanding if you’re looking to set up document workflows in the MCP environment.

00:16:18.000 –> 00:16:23.000

So, starting right from the beginning: the first step that you, as a user, would take is to enter a prompt.

00:16:23.000 –> 00:16:29.000

Now, this prompt would be handled by a large language model in the MCP environment.

00:16:29.000 –> 00:16:34.000

Also known as the MCP client. Now, the client does two things.

00:16:34.000 –> 00:16:42.000

Firstly, it understands the task being requested. And secondly, it figures out which of the tools is needed for this task to be completed.

00:16:42.000 –> 00:16:48.000

For instance, if we were to take our older example, it would require the email platform.

00:16:48.000 –> 00:16:57.000

The messaging app, and also the campaign tool. So once that is done, once the MCP client identifies the tools that are required.

00:16:57.000 –> 00:17:06.000

It would send a request to the MCP server. Now, MCP servers are lightweight programs that act as the gateway to these external tools.

00:17:06.000 –> 00:17:11.000

They know how to interact with applications via the MCP protocol.

00:17:11.000 –> 00:17:21.000

Which basically standardizes how tools are being exposed and called. So essentially, the MCP server is what actually connects to and operates the tools on the LLM’s behalf.
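To make this client/server split concrete, here is a minimal, hypothetical sketch in Python of what an MCP server conceptually does: it exposes tools by name, each with a description and an input schema the client can discover, and it executes a tool when the client calls it. This illustrates the pattern only; it is not the actual MCP SDK or wire protocol, and the tool name and function are invented.

```python
# Illustrative sketch only: a toy registry that exposes "tools" the way an
# MCP server does -- each tool has a name, a description, and an input
# schema, and the client invokes tools by name with JSON-style arguments.
# Tool names and functions here are invented for illustration.

TOOLS = {}

def tool(name, description, schema):
    """Register a function as a callable tool with a declared input schema."""
    def decorator(fn):
        TOOLS[name] = {"description": description, "schema": schema, "fn": fn}
        return fn
    return decorator

@tool("send_email", "Send an email to a contact list",
      {"type": "object",
       "properties": {"subject": {"type": "string"},
                      "contact_list": {"type": "string"}}})
def send_email(subject, contact_list):
    # A real server would talk to the email platform here.
    return f"queued '{subject}' to {contact_list}"

def list_tools():
    """What the client discovers: names, descriptions, schemas -- not code."""
    return {name: {"description": t["description"], "schema": t["schema"]}
            for name, t in TOOLS.items()}

def call_tool(name, arguments):
    """Dispatch a client's tool-call request to the named tool."""
    return TOOLS[name]["fn"](**arguments)

print(call_tool("send_email",
                {"subject": "August sale", "contact_list": "VIP customers"}))
# -> queued 'August sale' to VIP customers
```

The key point is the separation: the client only ever sees names, descriptions, and schemas, and the server owns the actual tool logic.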

00:17:21.000 –> 00:17:28.000

Whether it’s sending an email, fetching data, posting a message in Slack, and so on.

00:17:28.000 –> 00:17:34.000

So today, there is a growing number of MCP servers available that support integrations with popular business tools.

00:17:34.000 –> 00:17:40.000

You can also build your own custom server to connect with your internal systems as well.

00:17:40.000 –> 00:17:50.000

And this entire operation runs in the MCP host. So, in this webinar, we'll actually be dealing with Claude Desktop. You also have other hosts, like Windsurf.
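As a point of reference, hosts like Claude Desktop discover MCP servers through a JSON config file (claude_desktop_config.json). The `mcpServers` structure below is the shape that file uses; the server name, command, arguments, and environment variable are placeholders for illustration, not Unstract's actual published values.

```json
{
  "mcpServers": {
    "unstract": {
      "command": "uvx",
      "args": ["unstract-mcp-server"],
      "env": { "UNSTRACT_API_KEY": "<your-key>" }
    }
  }
}
```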

00:17:50.000 –> 00:17:58.000

Now, up next, we'll be venturing into how the MCP server can actually be set up, but before that, I just want to run a quick

00:17:58.000 –> 00:18:07.000

Poll to understand how interested you are in setting up the MCP environment, or learning more about it to adopt it in your business.

00:18:07.000 –> 00:18:15.000

So, this shouldn’t take more than a minute.

00:18:15.000 –> 00:18:21.000

So, we’d love to see if you are actively exploring MCPs, if you’re somewhat curious.

00:18:21.000 –> 00:18:30.000

Or whether you’re still learning more about them.

00:18:30.000 –> 00:18:39.000

All right, so we can see that most of you are quite interested in adopting MCPs in your business.

00:18:39.000 –> 00:18:44.000

Which is interesting to see, because this is the next big thing that

00:18:44.000 –> 00:18:56.000

Most businesses are looking out for.

00:18:56.000 –> 00:19:04.000

Alright, so hope you can see the results.

00:19:04.000 –> 00:19:14.000

All right, so in this webinar, we’ll be focusing specifically on the Unstract MCP server, which brings document extraction capabilities into the MCP ecosystem.

00:19:14.000 –> 00:19:25.000

That means that now, within the MCP environment itself, your LLM can access Unstract to automatically extract data from documents as part of a broader workflow.

00:19:25.000 –> 00:19:40.000

So before introducing the Unstract MCP server, since we have a couple of you who are completely new to the platform with us today, I thought I'd quickly run through an Unstract introduction. This shouldn't take more than 5 minutes.

00:19:40.000 –> 00:19:54.000

So Unstract is an LLM-powered unstructured data ETL platform. If I had to briefly explain the capabilities that the platform comes with, I would have to put it into two major buckets, that is the development phase and the deployment phase.

00:19:54.000 –> 00:20:06.000

Now, the development phase is where you would be uploading documents and also specifying what data you want extracted out of these documents, and the schema in which you want this data extracted.

00:20:06.000 –> 00:20:14.000

Through prompts in natural language. So this is done in a prompt engineering environment called Prompt Studio.

00:20:14.000 –> 00:20:25.000

Over here, you would also first extract the text from your documents and convert it into a format that is LLM consumable, because ultimately, LLMs would be working on these prompts and extracting the data from the documents.

00:20:25.000 –> 00:20:37.000

You also have access to various other capabilities in Prompt Studio, which can enable better accuracy or save costs. We will look at these when we cover the demo up next.

00:20:37.000 –> 00:20:46.000

So once you have developed a Prompt Studio project, where you've uploaded a document and tested how the data is being extracted, you can then deploy this

00:20:46.000 –> 00:20:56.000

Project in the four deployment options that the platform currently supports, depending on your business use case. So, if you were to get your document from a.

00:20:56.000 –> 00:21:09.000

application, and then process this document using the prompts you've given in Prompt Studio, and then send out the output data to another application, you would probably want to go for an API deployment.

00:21:09.000 –> 00:21:18.000

You also have an ETL pipeline when you're looking to get the input document from a file system, process it, and send out the output to a database or a data warehouse.

00:21:18.000 –> 00:21:24.000

Task pipeline is when you’re getting the input document from a file system, you process it, and send it back.

00:21:24.000 –> 00:21:36.000

To another file system. And finally, we also support human-in-the-loop deployment, because this is also a compliance requirement in many industries that require a manual check to see how the data is actually being extracted.

00:21:36.000 –> 00:21:41.000

So this is a brief overview of the capabilities that the platform comes with.

00:21:41.000 –> 00:21:47.000

To maybe throw out a few numbers that’ll give you a better idea of where Unstract stands today.

00:21:47.000 –> 00:21:54.000

We have over 5.5K stars on GitHub and a 950-plus-member Slack community.

00:21:54.000 –> 00:21:59.000

And currently, we are processing over 8 million pages per month for paid users alone.

00:21:59.000 –> 00:22:10.000

So, that said, let me venture into the Unstract demo. I'll be switching screens and taking you through the platform and how it works.

00:22:10.000 –> 00:22:17.000

So what you see over here is the Unstract interface. Now, as a first-time user, what you'd have to do is set up certain prerequisites

00:22:17.000 –> 00:22:29.000

Or connectors that you will require for you to perform operations on the platform. So since this is an LLM-driven platform, you have options to set up connectors with various popular LLMs out there.

00:22:29.000 –> 00:22:33.000

You can see that I have already set up connectors with a few.

00:22:33.000 –> 00:22:36.000

And you would also have to set up a few vector DBs.

00:22:36.000 –> 00:22:50.000

Embedding models, and finally, text extractors. Now, one of the text extractors over here is LLM Whisperer, which is also Unstract's in-house text extraction tool. Now, this is a tool that is known to be

00:22:50.000 –> 00:23:03.000

specifically designed for LLMs, so that you're able to extract text from your original documents in a format that is LLM-ready. So you will also get to see how this works when I cover a sample project next.

00:23:03.000 –> 00:23:11.000

So once you set up these connectors, you are good to go, and you can get started with the Prompt Studio project that I was talking about.

00:23:11.000 –> 00:23:21.000

Now, for the sake of time in this webinar, I'll be getting into an existing project that I have, the credit card parser project. So what I have over here

00:23:21.000 –> 00:23:31.000

Is basically a credit card statement that I am processing, and you can see I have a bunch of prompts on the left-hand side, which extract different data from this document.

00:23:31.000 –> 00:23:39.000

So, we'll be extracting the customer name, the customer address, spend line items from the statement, as well as the payment information.

00:23:39.000 –> 00:23:50.000

Now, the first step once your document is uploaded into Prompt Studio is to extract the text from the original document into a format that is LLM-ready.

00:23:50.000 –> 00:23:58.000

So this is done by LLM Whisperer. And you can see how the text has been extracted while preserving the original layout.

00:23:58.000 –> 00:24:07.000

Now, this layout preservation by LLM Whisperer could be considered the secret sauce behind the tool's success, because

00:24:07.000 –> 00:24:13.000

LLMs consume information very much like humans do, so it makes sense to

00:24:13.000 –> 00:24:19.000

Retain the original formatting of your document so that your LLMs are able to accurately extract text from them.

00:24:19.000 –> 00:24:27.000

So you can see even details as small as a logo have been preserved while maintaining the spacing, and this is how it looks once you extract the text.

00:24:27.000 –> 00:24:34.000

And this is basically the version that you'll be sending to the LLM for data extraction.

00:24:34.000 –> 00:24:39.000

So once this is done, and you can check this out under the raw view, you can start defining

00:24:39.000 –> 00:24:46.000

the prompts that specify the different data that you're looking to extract from this particular document.

00:24:46.000 –> 00:24:55.000

So, in this case, for instance, I’m looking to extract the customer name, and I’ve also given instructions on what is the schema for its extraction. So I want the first letter of each name to be capitalized.

00:24:55.000 –> 00:25:05.000

So, similarly, you can see in the second prompt, I have a JSON that I want as the output. I’ve also given instructions on how I want this to be structured.

00:25:05.000 –> 00:25:11.000

And you also have various output options for you to choose from. Different data type for your output to be in.

00:25:11.000 –> 00:25:19.000

So this is what is possible in Prompt Studio, and I'm just briefly running through this. You also have access to

00:25:19.000 –> 00:25:26.000

other capabilities under Prompt Studio Settings. So these are all accuracy-enabling or cost-saving features

00:25:26.000 –> 00:25:37.000

Which are available on the documentation, and we’ve actually done specific webinars on these features as well. So I’ll ask my team to drop the links to those webinars and documentation in chat.

00:25:37.000 –> 00:25:39.000

So you can check it out when you have the time.

00:25:39.000 –> 00:25:47.000

So, this is what Prompt Studio can do overall, and you can also upload multiple documents to check out how this works.

00:25:47.000 –> 00:25:54.000

So, in this case, we are looking at the credit card statement from American Express. So, if I just had to show you how another

00:25:54.000 –> 00:26:11.000

Statement would look. You can see that we have a different credit card statement, and even though it’s from a different bank, and it has a different layout altogether, the output is, again, being extracted perfectly well, and the prompts are generic in nature, so you don’t have to alter the prompts for each of your documents.

00:26:11.000 –> 00:26:19.000

just because the layout changes. So this would wrap up the capabilities under Prompt Studio that I wanted to go into today.

00:26:19.000 –> 00:26:28.000

So, once you’re happy with this project, you can export this as a tool, and you would have to deploy this tool in one of the four deployment options we’d spoken about.

00:26:28.000 –> 00:26:40.000

For instance, over here, we are deploying an API, so the input configuration is an API, the output configuration is an API, and all you have to do is basically

00:26:40.000 –> 00:26:48.000

this: you have all the exported tools available over here in the tools pane, so you just have to drag and drop the required tool into the

00:26:48.000 –> 00:26:54.000

Workflow chain over here. So, in this case, we just have one tool, but you can create complex workflows by dragging and dropping.

00:26:54.000 –> 00:27:04.000

more tools as required. So this is an API deployment, and once this is deployed, you can check this out under API deployments. You have

00:27:04.000 –> 00:27:09.000

An endpoint over here, and also a downloadable Postman collection that you can use.
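For a sense of what calling a deployed API looks like from code, here is a hedged sketch in Python using only the standard library. The URL, header, and form-field names below are placeholders invented for illustration; copy the real values from the endpoint and the Postman collection shown on the API deployments page.

```python
# Hedged sketch: calling a deployed document-extraction API endpoint.
# API_URL, API_KEY, and the "files" field name are placeholders, not
# Unstract's actual published values.
import mimetypes
import os
import urllib.request

API_URL = "https://example.com/deployment/api/demo-org/credit_card_parser/"  # placeholder
API_KEY = "<your-api-key>"  # placeholder

def build_request(pdf_path: str) -> urllib.request.Request:
    """Build a multipart/form-data POST that uploads one document."""
    boundary = "----unstract-webinar-demo"
    with open(pdf_path, "rb") as f:
        file_bytes = f.read()
    ctype = mimetypes.guess_type(pdf_path)[0] or "application/octet-stream"
    filename = os.path.basename(pdf_path)
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="files"; filename="{filename}"\r\n'
        f"Content-Type: {ctype}\r\n\r\n"
    ).encode() + file_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
        method="POST",
    )

# To actually send it (requires a live deployment):
# with urllib.request.urlopen(build_request("statement.pdf")) as resp:
#     print(resp.read().decode())
```

In practice the downloadable Postman collection is the authoritative reference for the exact endpoint, auth header, and payload shape.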

00:27:09.000 –> 00:27:13.000

So similarly, if I had to set up an ETL pipeline.

00:27:13.000 –> 00:27:26.000

My input configuration would be a file system. We've given the same tool over here, and you can see that I've also specified a database to which I want the output data to go. So this is how you would deploy your project.

00:27:26.000 –> 00:27:32.000

And that wraps up what we wanted to cover today, to give you a quick idea of what the platform does.

00:27:32.000 –> 00:27:41.000

So now let's go back to the presentation, and we'll now see how the Unstract MCP server works, and how it

00:27:41.000 –> 00:27:46.000

Could be deployed in the MCP environment.

00:27:46.000 –> 00:27:53.000

Now, the deployment options that you had seen earlier could be useful for pretty straightforward use cases.

00:27:53.000 –> 00:28:09.000

But in enterprise environments, document workflows are rarely that simple. You might need more complex routing, advanced orchestration, and coordination across multiple tools in order to complete the end-to-end document extraction cycle. So that is where these traditional.

00:28:09.000 –> 00:28:15.000

Deployment methods might fall short. So, in order to fit into these advanced scenarios.

00:28:15.000 –> 00:28:32.000

Unstract provides ways to do this. One way is that you can actually implement Unstract in agentic workflows, like n8n; we've done a separate webinar on that. And the other one that we are currently supporting is MCP, which is also

00:28:32.000 –> 00:28:39.000

being rapidly adopted and looked into. So we'll see how MCPs offer the flexibility

00:28:39.000 –> 00:28:44.000

and the intelligence to basically connect to multiple tools and easily

00:28:44.000 –> 00:28:53.000

get started with workflows and complete tasks. So, currently, we're also offering a separate MCP server for LLM Whisperer, because

00:28:53.000 –> 00:28:59.000

LLM Whisperer is also available as a standalone tool. So now, let's take a look at

00:28:59.000 –> 00:29:08.000

the use cases that would probably call for an Unstract MCP server or an LLM Whisperer MCP server.

00:29:08.000 –> 00:29:16.000

So in this webinar, I’ve taken a real-world example, which is quite complex. So, we would be parsing invoices in this webinar.

00:29:16.000 –> 00:29:21.000

And, uh, just to give you a quick idea of the problem statement that we are dealing with.

00:29:21.000 –> 00:29:27.000

Let’s say that an enterprise is, uh, processing multiple invoices per day.

00:29:27.000 –> 00:29:31.000

So, if you were not to automate this, if this was being done manually.

00:29:31.000 –> 00:29:37.000

You would be receiving these invoices in unstructured formats, it could be time-consuming for manual data entry.

00:29:37.000 –> 00:29:51.000

It would be difficult to prioritize urgent or high-value invoices, and worse, these invoices could actually fall through, and you might not be able to get to them within the due date or within

00:29:51.000 –> 00:30:00.000

time, so you also risk that. Let's see how the workflow is designed to actually handle the challenges that you've just seen.

00:30:00.000 –> 00:30:06.000

So in this workflow, I've basically designed it to get invoices from

00:30:06.000 –> 00:30:14.000

The email or the inbox. And what we will be doing next is download all the invoices received via email.

00:30:14.000 –> 00:30:26.000

And if these invoices are valid, it would be sent to an Unstract API that extracts the required invoice data. So this could be data like the invoice number, the vendor name.

00:30:26.000 –> 00:30:49.000

the due date, the due amount, and so on. So just to show you what exactly we are looking to extract, let me take you over to the Unstract Prompt Studio project, which will be deployed as an API in this case.

00:30:49.000 –> 00:31:04.000

So what you see over here is basically the Prompt Studio project, which I will be deploying as an API in the MCP environment as well. So you have the prompts over here; we are extracting the invoice number, the invoice date,

00:31:04.000 –> 00:31:11.000

due date, vendor details, the total amount, invoice status, and so on. So this is basically the

00:31:11.000 –> 00:31:19.000

Various data that we’re collecting, and we would also be populating a Google Sheet for each day with the vendor name.

00:31:19.000 –> 00:31:21.000

The vendor ID, the due amount, as well as the due date.
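To make that concrete, the extracted output for one invoice might look something like the JSON below. The field names and values are hypothetical, for illustration only; your actual output schema is whatever you define in the Prompt Studio prompts.

```json
{
  "invoice_number": "INV-2025-0042",
  "vendor_name": "Acme Supplies",
  "vendor_id": "V-1003",
  "invoice_date": "2025-08-01",
  "due_date": "2025-08-12",
  "due_amount": 9450.00,
  "invoice_status": "unpaid"
}
```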

00:31:21.000 –> 00:31:32.000

So, once we extract the data, this sheet would be populated, and moving on from there, we would be creating a new column on the sheet to calculate the days until the invoice is due.

00:31:32.000 –> 00:31:43.000

And depending on that, if the days until the invoice is due is less than 3 days, the finance team would be messaged on Slack, and this invoice would be marked as urgent.

00:31:43.000 –> 00:31:46.000

If the due amount is, let’s say, greater than $9K, then.

00:31:46.000 –> 00:31:53.000

A separate manager’s channel would be notified. And finally, if there are any missing values or errors in the invoice.

00:31:53.000 –> 00:32:03.000

An error channel will also be notified. So this entire workflow is designed to be run in the MCP environment. We'll be using Claude Desktop, as I mentioned earlier.
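The routing rules just described boil down to simple date arithmetic and threshold checks. Here is a minimal sketch, assuming extracted fields named invoice_number, vendor_name, due_date (ISO format), and due_amount; the Slack channel names are placeholders.

```python
# Hedged sketch of the invoice-routing rules described above. Field and
# channel names are illustrative placeholders, not the actual setup.
from datetime import date

URGENT_DAYS = 3     # due in under 3 days -> urgent notification
HIGH_VALUE = 9000   # due amount over $9K -> managers' channel

def route_invoice(invoice: dict, today: date) -> list:
    """Return the Slack channels that should be notified for one invoice."""
    channels = []
    # Missing or null values go straight to the error channel.
    required = ("invoice_number", "vendor_name", "due_date", "due_amount")
    if any(invoice.get(k) in (None, "") for k in required):
        channels.append("#invoice-errors")
        return channels
    days_until_due = (date.fromisoformat(invoice["due_date"]) - today).days
    if days_until_due < URGENT_DAYS:
        channels.append("#finance-urgent")
    if invoice["due_amount"] > HIGH_VALUE:
        channels.append("#invoice-managers")
    return channels

print(route_invoice(
    {"invoice_number": "INV-1", "vendor_name": "Acme",
     "due_date": "2024-08-02", "due_amount": 12000},
    today=date(2024, 8, 1),
))  # -> ['#finance-urgent', '#invoice-managers']
```

In the demo, the LLM performs this reasoning itself from the prompt; the sketch just shows that the underlying logic is deterministic and easy to verify.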

00:32:03.000 –> 00:32:10.000

So, for this to run, we would be using the Zapier MCP server for Google Sheets.

00:32:10.000 –> 00:32:15.000

Gmail and Slack, and the Unstract MCP server for accessing the Unstract tool.

00:32:15.000 –> 00:32:25.000

So this, again, gives you an overview of how this setup is done. So again, we’re hosting on Claude Desktop. The MCP client in this case is Claude Sonnet 4.

00:32:25.000 –> 00:32:29.000

And the MCP servers we're using are the Unstract MCP server, as I mentioned earlier, to access

00:32:29.000 –> 00:32:37.000

Unstract in the MCP environment, and for the other tools that I have over here, which are Gmail, Google Sheets, and Slack, we'll again be using

00:32:37.000 –> 00:32:45.000

The Zapier MCP server. So that said, let’s move on into the demo.

00:32:45.000 –> 00:32:51.000

All right, so what you have over here is basically Claude Desktop, and I have a couple of chats.

00:32:51.000 –> 00:32:56.000

So this is where I have already run the invoice extraction, since we are processing

00:32:56.000 –> 00:33:02.000

batches of invoices, and to save time during this session, I already have it run over here.

00:33:02.000 –> 00:33:10.000

So what you could do is explore the settings, and maybe under the developer feature, this is where I have set up the various

00:33:10.000 –> 00:33:16.000

Tools and the servers that are running, so you have the file system server over here to access.

00:33:16.000 –> 00:33:24.000

the local files; the LLM Whisperer MCP server is also set up, as well as the Unstract MCP server.

00:33:24.000 –> 00:33:45.000

So you can always check this out over here. So moving back, you can also see the various tools that are available over here in the tools pane. So, currently, along with Unstract, LLM Whisperer, and File System, we've also integrated with

00:33:45.000 –> 00:33:57.000

Zapier, which would offer various tools from Slack, so each of these tools would perform a specific function. So right from creating a private channel to retrieving messages, to deleting messages, you have special functions for each of these actions.

00:33:57.000 –> 00:34:03.000

And we’ve integrated with Google Sheets, so you have various functions, um, for that as well.

00:34:03.000 –> 00:34:11.000

With Google Drive, and finally with Gmail. So these are the different tools that I've integrated with using the Zapier MCP server.

00:34:11.000 –> 00:34:21.000

So, let's go through the prompts and how I've been able to implement the workflow that we'd just taken a look at.

00:34:21.000 –> 00:34:29.000

So the first prompt that I have over here is to download the attachments from all the emails as PDFs

00:34:29.000 –> 00:34:35.000

which were received in my inbox today, and we are storing them in a specific folder on the system.

00:34:35.000 –> 00:34:48.000

So you can see how Claude is actually giving you a step-by-step narrative on how it went about performing this particular action. And you can also inspect each of these actions and the responses that Claude has given you.

00:34:48.000 –> 00:34:54.000

So this is possible for each of these stages. You can see that it is first reading the email.

00:34:54.000 –> 00:34:59.000

And then downloading the attachments in the email in case they are actually invoices.

00:34:59.000 –> 00:35:07.000

And you can see that it has finally given you a result: 4 invoices were found, and each of them has been downloaded into the local system.

00:35:07.000 –> 00:35:13.000

So let me just take you to the email, and I'll show you what these invoices are as well.

00:35:13.000 –> 00:35:19.000

So you can see that I have four emails over here in my inbox.

00:35:19.000 –> 00:35:32.000

These are the invoices that we had given; they are sample invoices that we created for the webinar. You can see that some of them have a value greater than

00:35:32.000 –> 00:35:42.000

$9,000, so those invoices would actually have to fall into the managers' Slack channel. Some of them have missing data; in this case, you have the due date missing.

00:35:42.000 –> 00:35:45.000

You could have the due date in less than 3 days.

00:35:45.000 –> 00:35:54.000

So over here, we have the total amount greater than $9K, so this is the invoice that should ideally go into the

00:35:54.000 –> 00:35:59.000

managers' channel, and over here, we have the due date that is

00:35:59.000 –> 00:36:04.000

due in less than 3 days.

00:36:04.000 –> 00:36:09.000

All right. So that said, let me go back to the

00:36:09.000 –> 00:36:19.000

Chat over here, and once you have all these documents saved into your system, we are then calling the Unstract tool, as well as the LLM Whisperer tool.

00:36:19.000 –> 00:36:24.000

to process each of the downloaded invoices. So you can, again, inspect the

00:36:24.000 –> 00:36:31.000

Response for each of these, and how it has actually worked. So you can see how the data has been extracted as text.

00:36:31.000 –> 00:36:37.000

Across each of these documents, when you take a look at the response that is given by the Unstract tool.

00:36:37.000 –> 00:36:43.000

And finally, once the data is extracted as you want it across all the four documents.

00:36:43.000 –> 00:36:48.000

You get a summary of the data that was extracted over here.

00:36:48.000 –> 00:36:55.000

So, moving on from here, I gave another prompt. This is the part where we create the Google Sheet

00:36:55.000 –> 00:37:05.000

and add the relevant data items. Over here, I’ve specified that it should create a Google Sheet with today’s date as its name, and the data items to be added are

00:37:05.000 –> 00:37:13.000

the invoice number, the vendor name, the due date, and the due amount. We’re also giving appropriate names for each of these columns.

00:37:13.000 –> 00:37:20.000

If any values are missing, we again mark that particular cell as null. And finally,

00:37:20.000 –> 00:37:26.000

we’re adding another column to the sheet, asking the model to calculate the number of days until the invoice is due

00:37:26.000 –> 00:37:33.000

based on the current date. So again, you can inspect each of the actions performed by the

00:37:33.000 –> 00:37:42.000

LLM: the Google Sheets tool is now being called, you can see that it has created a spreadsheet over here, and next it is updating the rows in this particular spreadsheet.

00:37:42.000 –> 00:37:51.000

Each of these is also given along with a narrative, so that you are able to understand what the model is doing

00:37:51.000 –> 00:38:00.000

with respect to this particular task. Once the Google Sheet is created, you also get the sheet URL over here, which you can open.

00:38:00.000 –> 00:38:06.000

And just to show you how this looks, let me take you to the Google Sheet that I had created.

00:38:06.000 –> 00:38:22.000

So since I ran this yesterday, we have yesterday’s date over here, and we have the four columns that I’d asked for: the invoice number, the vendor name, the due date, and the due amount. You can see that the days until due is also calculated.
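The “days until due” column is a simple date difference; here is a minimal sketch, assuming the due date comes out of extraction in ISO format (YYYY-MM-DD). In the demo this calculation is performed by the model via the Google Sheets tool; the function name here is just illustrative.

```python
from datetime import date

def days_until_due(due_date_str: str, today: date) -> int:
    """Number of days from `today` until the invoice's due date.

    Assumes `due_date_str` is in ISO format (YYYY-MM-DD); a negative
    result means the invoice is already overdue.
    """
    due = date.fromisoformat(due_date_str)
    return (due - today).days

# Example: an invoice due on 2025-05-10, evaluated on 2025-05-08.
print(days_until_due("2025-05-10", date(2025, 5, 8)))  # 2
```

This is the value the urgency rule later checks against the 3-day threshold.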

00:38:22.000 –> 00:38:41.000

So once this is done, we next perform the step where we send out notifications to the different teams on Slack. We’re asking the model to consider all amounts to be in dollars, and if the amount is greater than $9K, it should be sent to the invoice manager

00:38:41.000 –> 00:38:48.000

Slack channel, saying that a new invoice has been received. And if the days until due is less than

00:38:48.000 –> 00:38:54.000

three days, then a message marking the invoice as urgent should be sent to the invoice finance team.

00:38:54.000 –> 00:39:00.000

And if any of the data, like the vendor name, the due date, or the due amount, is missing,

00:39:00.000 –> 00:39:14.000

then this is reported to the invoice error channel. Once I set this up, I finally download the Google Sheet that was created, and it is sent to the invoice finance team Slack channel along with the message “today’s invoices.”
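The routing rules just described can be sketched as a small function. This is a hedged illustration only: in the demo the model applies these rules itself and calls the Slack tool, and the field names and channel names below are assumptions, not the actual tool schema.

```python
def route_invoice(invoice: dict) -> list[str]:
    """Return the Slack channels an invoice should be reported to,
    following the three routing rules from the demo prompt.
    Field and channel names are illustrative placeholders."""
    channels = []
    required = ("vendor_name", "due_date", "due_amount")
    # Rule 3: any missing required value -> invoice error channel.
    if any(invoice.get(field) is None for field in required):
        channels.append("#invoice-errors")
    # Rule 1: amount greater than $9K -> invoice manager channel.
    amount = invoice.get("due_amount")
    if amount is not None and amount > 9000:
        channels.append("#invoice-manager")
    # Rule 2: due in less than 3 days -> urgent, finance team channel.
    days = invoice.get("days_until_due")
    if days is not None and days < 3:
        channels.append("#invoice-finance-team")
    return channels

sample = {"vendor_name": "Acme", "due_date": "2025-05-10",
          "due_amount": 9500.0, "days_until_due": 2}
print(route_invoice(sample))  # ['#invoice-manager', '#invoice-finance-team']
```

Note that the rules are independent, so a single invoice can trigger more than one notification, as the $9.5K invoice due in 2 days does here.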

00:39:14.000 –> 00:39:18.000

So you can see how this is done; we call the Slack tool for this,

00:39:18.000 –> 00:39:24.000

and this entire process is done automatically.

00:39:24.000 –> 00:39:28.000

Now let me show you how this reflects on Slack.

00:39:28.000 –> 00:39:43.000

We have the invoice manager Slack channel over here, and you can see that I’ve gotten a notification that a new invoice has been received along with the due amount, the days until this invoice is due, and the invoice number. So similarly, we have other notifications for previously received invoices over here.

00:39:43.000 –> 00:39:56.000

Under the invoice finance team channel, we again have urgent invoices notified. And finally, we also have the report that is sent in this channel as well, which

00:39:56.000 –> 00:39:59.000

gives you an idea of all the invoices received on a particular day.

00:39:59.000 –> 00:40:05.000

You can see that the model also gives you a summary of what the invoices were and

00:40:05.000 –> 00:40:16.000

what is actually present in this report. And finally, in the invoice error channel, you can see that we got a notification about a particular invoice where the date was missing, and we also have an

00:40:16.000 –> 00:40:21.000

indication of which value was missing, which is the due date. So this is basically

00:40:21.000 –> 00:40:37.000

what you can perform using the MCP environment. In this case, I don’t really have to log into any of the platforms we just saw, so I don’t have to manually log into Slack or Google Sheets, or my Gmail account.

00:40:37.000 –> 00:40:43.000

All I have to do is enter prompts, it’s as easy as that, and this entire process is done automatically.

00:40:43.000 –> 00:40:50.000

So this is the benefit that we’re looking to get with the MCP environment,

00:40:50.000 –> 00:40:59.000

and this is what we are heading towards next. So that brings me to the end of the demo segment.

00:40:59.000 –> 00:41:05.000

Let me move back to the presentation. So as a final note to wrap up.

00:41:05.000 –> 00:41:14.000

MCPs are not just a tool; they represent a fundamental shift in how we interact with AI.

00:41:14.000 –> 00:41:23.000

By bringing orchestration capabilities to large language models, MCPs now allow you to move beyond isolated prompts.

00:41:23.000 –> 00:41:30.000

And this actually means faster processing, fewer manual handoffs, and also tighter integration with business systems.

00:41:30.000 –> 00:41:40.000

Currently, you might have LLMs working separately with different business apps, but how do you bring all of this together under one hood? That is what MCP makes

00:41:40.000 –> 00:41:46.000

really easy: it brings together fragmented integrations that you might have running separately.

00:41:46.000 –> 00:41:59.000

And the ecosystem around MCP is still at an early stage, so as it evolves there is a unique opportunity to shape the standards, build future-ready infrastructure, and stay ahead of the curve.

00:41:59.000 –> 00:42:02.000

So, that said, we’ve arrived at the conclusion of this webinar.

00:42:02.000 –> 00:42:10.000

And we’ll be moving into a Q&A segment, in case we have any questions.

00:42:10.000 –> 00:42:24.000

I’d also like to announce that you can sign up for a free personalized demo to understand the Unstract MCP server better. I’ve tried to put together a very general demo to show you briefly how this works,

00:42:24.000 –> 00:42:30.000

but you can sit with one of our experts, look at your business needs, and see how this can be customized for your particular business.

00:42:30.000 –> 00:42:38.000

You can also sign up for a demo if you are simply looking to explore Unstract or LLM Whisperer for your business.

00:42:38.000 –> 00:42:42.000

So I’ll just run a quick poll. If you are interested, you can

00:42:42.000 –> 00:43:02.000

sign up for this demo as well.

00:43:02.000 –> 00:43:25.000

So while I run that poll, we can also venture into the Q&A in case we have any questions remaining.

00:43:25.000 –> 00:43:28.000

All right, thank you folks. Looks like we don’t have any questions left.

00:43:28.000 –> 00:43:36.000

Thank you so much for joining this webinar today. It was great to have you, and we really look forward to having you at our upcoming events as well.

00:43:36.000 –> 00:43:47.000

Thank you so much.