by Niklas Hatje
## Use case

This workflow is useful when you automatically add new leads to your Pipedrive CRM. Usually you would have to review each lead manually to determine whether it is a good fit, a time-consuming process that increases the chances of missing important leads. This workflow ensures every new lead is promptly evaluated as soon as it is added.

## What this workflow does

The workflow runs every 5 minutes. On every run, it checks your new Pipedrive leads and enriches them with Clearbit. It then marks the items as enriched, checks whether the lead's company matches certain criteria (in this case: B2B and more than 100 employees, see the sketch below), and sends a Slack alert to a channel for every match.

## Pre-conditions

You must have Pipedrive, Clearbit, and Slack accounts. You also need to set up the custom fields Domain and Enriched at in Pipedrive.

## Setup

1. Go to Company Settings -> Data fields -> Organization and add Domain as a custom field.
2. Go to Company Settings -> Data fields -> Leads and add Enriched at as a custom date field.
3. Add your Pipedrive, Clearbit and Slack credentials.
4. Fill in the setup node. To get the IDs of your custom fields, simply run the "Show only custom organization fields" and "Show only custom lead fields" nodes and copy the keys of your Domain and Enriched at fields.

## How to adjust this workflow to your needs

- Modify the criteria to suit your definition of an interesting lead.
- If you only want to focus on interesting leads in Pipedrive, add a node that archives all others.

This workflow was built using n8n version 1.29.1.
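As a rough illustration of the filter step, here is a minimal Python sketch of the kind of check an n8n Code node could run on the Clearbit-enriched company data. The field names (`tags`, `metrics.employees`) follow Clearbit's company schema as commonly documented, but treat the exact paths and the B2B test as assumptions to adapt to your own enrichment output.

```python
# Minimal sketch of the "interesting lead" check (Clearbit-style fields are assumed).
def is_interesting(company: dict) -> bool:
    tags = company.get("tags") or []
    employees = (company.get("metrics") or {}).get("employees") or 0
    return "B2B" in tags and employees > 100

# Example enriched payload (hypothetical values)
lead = {"tags": ["B2B", "SAAS"], "metrics": {"employees": 250}}
print(is_interesting(lead))  # True -> would trigger the Slack alert
```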
by Mark Shcherbakov
## Video guide

I prepared a detailed guide explaining how to build an AI-powered meeting assistant that provides real-time transcription and insights during virtual meetings. Youtube Link

## Who is this for?

This workflow is ideal for business professionals, project managers, and team leaders who need reliable transcription of meetings for improved documentation and note-taking. It is particularly beneficial for those who conduct frequent virtual meetings across platforms such as Zoom and Google Meet.

## What problem does this workflow solve?

Transcribing meetings manually is tedious and prone to error. This workflow automates the transcription process in real time, ensuring that key discussions and decisions are accurately captured and easily accessible for later review, enhancing productivity and clarity in communication.

## What this workflow does

The workflow employs an AI-powered assistant to join virtual meetings and capture discussions through real-time transcription. Key functionalities include:

- Automatic joining of meetings on platforms like Zoom, Google Meet, and others, with real-time transcription.
- Integration with transcription APIs (e.g. AssemblyAI) to deliver seamless and accurate capture of dialogue.
- Structuring and storing transcriptions efficiently in a database for easy retrieval and analysis.

- Real-time transcription: the assistant captures audio during meetings and transcribes it in real time, allowing participants to focus on discussions.
- Keyword recognition: key phrases can trigger specific actions, such as noting important points or prompting the assistant.
- Structured data management: the assistant maintains a database of transcriptions linked to meeting details for organized storage and quick access later.

## Setup

### Preparation

1. Create a Recall.ai API key.
2. Set up a Supabase account and create the table:

   ```sql
   create table public.data (
     id uuid not null default gen_random_uuid (),
     date_created timestamp with time zone not null default (now() at time zone 'utc'::text),
     input jsonb null,
     output jsonb null,
     constraint data_pkey primary key (id)
   ) tablespace pg_default;
   ```

3. Create an OpenAI API key.

### Development

- Bot creation: use a node to create the bot that will join meetings. Provide the meeting URL and set transcription options within the API request.
- Authentication: configure authentication settings via a Bearer token for interacting with your transcription service.
- Webhook setup: create a webhook to receive real-time transcription updates, ensuring timely data capture during meetings.
- Join meeting: set the bot to join the specified meeting and actively listen to capture conversations.
- Transcription handling: combine transcription fragments into cohesive sentences and manage dialog arrays for coherence (see the sketch below).
- Trigger actions on keywords: set up keyword recognition that can initiate requests to the OpenAI API for additional interactions based on captured dialogue.
- Output and summary generation: produce insights and summary notes from the transcriptions that can be stored back into the database for future reference.
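As a rough illustration of the transcription-handling step, below is a minimal Python sketch of how incoming transcript fragments could be merged into per-speaker dialog lines. The event field names (`speaker`, `words`) are assumptions for illustration, not the exact webhook schema of the transcription service.

```python
# Minimal sketch: merge streamed transcript fragments into per-speaker dialog lines.
# Field names are assumed for illustration; adapt them to the actual webhook payload.
def merge_fragments(fragments: list[dict]) -> list[str]:
    dialog: list[str] = []
    current_speaker, buffer = None, []
    for frag in fragments:
        speaker = frag.get("speaker", "Unknown")
        text = " ".join(w["text"] for w in frag.get("words", []))
        if speaker != current_speaker and buffer:
            dialog.append(f"{current_speaker}: {' '.join(buffer)}")
            buffer = []
        current_speaker = speaker
        buffer.append(text)
    if buffer:
        dialog.append(f"{current_speaker}: {' '.join(buffer)}")
    return dialog

fragments = [
    {"speaker": "Alice", "words": [{"text": "Let's"}, {"text": "start."}]},
    {"speaker": "Bob", "words": [{"text": "Agreed."}]},
]
print(merge_fragments(fragments))  # ["Alice: Let's start.", 'Bob: Agreed.']
```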
by Jay Emp0
## 🤖 MCP Personal Assistant Workflow Description

This workflow integrates multiple productivity tools into a single AI-powered assistant using n8n, acting as a centralized control hub to receive and execute tasks across Google Calendar, Gmail, Google Drive, LinkedIn, Twitter, and more.

### ✅ Key Capabilities

- **AI Agent + Tool Use**: built using n8n's AI Agent and MCP system, enabling intelligent multi-step reasoning.
- **Tool Integration**:
  - Google Calendar: schedule, update, delete events
  - Gmail: search, draft, send emails
  - Google Drive: manage files and folders
  - LinkedIn & Twitter: post updates, send DMs
  - Utility tools: fetch date/time, search URLs
- **Discord Input**: accepts prompts via n8n_discord_trigger_bot (repo link).

### 🛠 Setup Instructions

1. Timezone configuration: go to Settings > Default Timezone in n8n and set it to your local timezone (e.g. Asia/Jakarta). Ensure all Date & Time nodes explicitly use the same zone to avoid UTC-related bugs (see the timezone sketch below this description).
2. Tool authentication: replace all OAuth credentials for Gmail, Google Drive, Google Calendar, Twitter, and LinkedIn. Use your own accounts when copying this workflow.
3. Platform adaptability: while designed for Discord, you can replace the Discord trigger with any other chat or webhook service, e.g. Telegram, Slack, WhatsApp webhook, or the n8n Form Trigger.

### 📦 Strengths

- Great for document retrieval, email summarization, calendar scheduling, and social posting.
- Reduces the need for tab-switching across multiple platforms.
- Tested with a comprehensive checklist across categories such as Calendar, Gmail, Google Drive, Twitter, LinkedIn, utility tools, and cross-tool actions. (Refer to the discordGPT prompt checklist below for prompt coverage.)

### ⚠️ Limitations

- ❌ Binary uploads: AI agents and the MCP server currently struggle with binary payloads. Uploading files to Gmail, Google Drive, or LinkedIn may fail due to format serialization issues. Binary operations (upload/post) are under development and will be fixed in future iterations.
- ❌ Date bugs: if timezone settings are incorrect, event times may default to UTC, leading to misaligned calendar events.

### 🔬 Testing

Use the provided prompt checklist for full coverage of:

- ✅ Core feature flows
- ✅ Edge cases (e.g. invalid dates, nonexistent users)
- ✅ Cross-tool chains (e.g. Google Drive → Gmail → LinkedIn)

### ✅ MCP Assistant Test Prompt Checklist

#### 📅 Google Calendar

- [x] "Schedule a meeting with Alice tomorrow at 10am and send an invite to alice@wonderland.com"
- [x] "Create an event called 'Project Sync' on Friday at 3pm with Bob and Charlie."
- [x] "Update the time of my call with James to next Monday at 2pm."
- [x] "Delete my meeting with Marketing next Wednesday."
- [x] "What is my schedule tomorrow?"

#### 📧 Gmail

- [x] "Show me unread emails from this week."
- [x] "Search for emails with subject: invoice"
- [x] "Reply to the latest email from john@company.com saying 'Thanks, noted!'"
- [x] "Draft an email to info@a16z.com with subject 'Emp0 Fundraising' and draft the body of the email with an investment opportunity in Emp0, scrape this site https://Emp0.com to get to know more about emp0.com"
- [x] "Send an email to hi@cursor.com with subject 'Feature request' and cc sales@cursor.com"
- [ ] "Send an email to recruiting@openai.com, write about how you like their product and want to apply for a job there and attach my latest CV from Google Drive"

#### 🗂 Google Drive

- [ ] "Upload the PDF you just sent me to my Google Drive."
- [x] "Create a folder called 'July Reports' inside Emp0 shared drive."
- [x] "Move the file named 'Q2_Review.pdf' to 'Reports/2024/Q2'."
- [x] "Share the folder 'Investor Decks' with info@a16z.com as viewer."
- [ ] "Download the file 'Wayne_Li_CV.pdf' and attach it in Discord."
- [x] "Search for a file named 'Invoice May' in my Google Drive."

#### 🖼 LinkedIn

- [x] "Think of a random and inspiring quote. Post a text update on LinkedIn with the quote and end with a question so people will answer and increase engagement"
- [ ] "Post this Google Drive image to LinkedIn with the caption: 'Team offsite snapshots!'"
- [x] "Summarize the contents of this workflow and post it on LinkedIn with the original url https://n8n.io/workflows/5230-content-farming-ai-powered-blog-automation-for-wordpress/"

#### 🐦 Twitter

- [x] "Tweet: 'AI is eating operations. Fast.'"
- [x] "Send a DM to @founderguy: 'Would love to connect on what you’re building.'"
- [x] "Search Twitter for keyword: 'founder advice'"

#### 🌐 Utilities

- [x] "What time is it now?"
- [ ] "Download this PDF: https://ontheline.trincoll.edu/images/bookdown/sample-local-pdf.pdf"
- [x] "Search this URL and summarize important tech updates today: https://techcrunch.com/feed/"

#### 📎 Discord Attachments

- [ ] "Take the image I just uploaded and post it to LinkedIn."
- [ ] "Get the file from my last message and upload it to Google Drive."

#### 🧪 Edge Cases

- [x] "Schedule a meeting on Feb 30."
- [x] "Send a DM to @user_that_does_not_exist"
- [ ] "Download a 50MB PDF and post it to LinkedIn"
- [x] "Get the latest tweet from my timeline and email it to myself."

#### 🔗 Cross-tool Flows

- [ ] "Get the latest image from my Google Drive and post it on LinkedIn with the caption 'Another milestone hit!'"
- [ ] "Find the latest PDF report in Google Drive and email it to investor@vc.com."
- [ ] "Download an image from this link and upload it to my Google Drive: https://example.com/image.png"
- [ ] "Get the most recent attachment from my inbox and upload it to Google Drive."

Run each of these in isolated test cases. For cross-tool flows, verify binary serialization integrity.

### 🧠 Why Use This Workflow?

This is an always-on personal assistant that can:

- Process natural language input
- Handle multi-step logic
- Execute commands across 6+ platforms
- Be extended with more tools and memory

If you want to interact with all your work tools from a single prompt, this is your base to start from.

### 📎 Repo & Credits

- Discord bot trigger: n8n_discord_trigger_bot
- Creator: Jay (Emp₀)
by Zacharia Kimotho
This workflow makes it easier to prepare for meetings and calls by researching your lead right before the call and creating a high-level meeting prep that is sent to your email. It removes the extra steps teams would otherwise need to learn about their leads, research them, and prepare for upcoming calls.

## How does it work

- The workflow starts when we capture the webhook from cal.com for new bookings. Ensure you have a field on the booking form to collect the lead's LinkedIn profile; this can be optional or mandatory depending on your preferences.
- When a new event is booked, the lead is added to an Airtable CRM for appointments and new bookings. This table contains all the fields needed to enrich and maintain your CRM.
- If the lead has a LinkedIn profile, we research their content and posts on LinkedIn and perform a lead enrichment to get as much information as we can about the lead, then create a new meeting prep.

## What you need

- Bright Data API
- Cal.com account/calendar. Other calendars such as Calendly or Google Calendar can be used too with a few tweaks.
- CRM - this can be anything, not just Airtable.

## Setting it up

1. Create/update your calendar to collect users' LinkedIn profiles/bios.
2. Add a new webhook and subscribe to the desired events.
3. Map the fields from the webhook to match your CRM (see the sketch below). If you have no CRM, make a copy of this Airtable CRM and map the fields to your account. We will be using the base and table IDs to make the mapping easier.
4. Set up your Bright Data API and select LinkedIn as the data source for the scraping. You can extract more data from the bio as needed.
5. Write this info to the CRM under the lead enrichment table and map it accordingly.
6. You can update the prompts on the AI models or work with them as they are.
7. Update the Gmail node to send the meeting preps to you, and finally update the CRM with the generated meeting prep.

This automated process can save your team a couple of minutes each day, time better spent on client fulfillment. If you would like to learn more about n8n templates like this, feel free to reach out via LinkedIn. Happy productivity!
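As a rough illustration of step 3, here is a minimal Python sketch of pulling the attendee details and a LinkedIn answer out of a cal.com BOOKING_CREATED webhook payload before mapping them to CRM fields. The payload shape shown (`payload.attendees`, `payload.responses`) is an assumption based on typical cal.com webhooks; inspect your own webhook test data for the exact structure.

```python
# Minimal sketch: extract CRM fields from a cal.com booking webhook.
# The payload structure below is an assumption; inspect a real test event to confirm paths.
def map_booking_to_crm(event: dict) -> dict:
    payload = event.get("payload", {})
    attendee = (payload.get("attendees") or [{}])[0]
    responses = payload.get("responses") or {}
    return {
        "Name": attendee.get("name"),
        "Email": attendee.get("email"),
        "LinkedIn": (responses.get("linkedin") or {}).get("value"),  # custom booking question
        "Meeting Start": payload.get("startTime"),
    }

sample = {
    "triggerEvent": "BOOKING_CREATED",
    "payload": {
        "startTime": "2024-07-01T10:00:00Z",
        "attendees": [{"name": "Jane Doe", "email": "jane@acme.com"}],
        "responses": {"linkedin": {"value": "https://www.linkedin.com/in/janedoe"}},
    },
}
print(map_booking_to_crm(sample))
```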
by Automate With Marc
## 📬 What This Workflow Does

This workflow automatically scrapes recent high-value congressional stock trades from Quiver Quantitative, summarizes the key transactions, and delivers a neatly formatted report to your inbox, every single day. It combines Firecrawl's powerful content extraction, OpenAI's GPT formatting, and n8n's automation engine to turn raw HTML data into a digestible, human-readable email.

Watch the full tutorial on how to build this workflow here: https://www.youtube.com/watch?v=HChQSYsWbGo&t=947s&pp=0gcJCb4JAYcqIYzv

## 🔧 How It Works

1. 🕒 Schedule Trigger: fires daily at a set hour (e.g. 6 PM) to begin the data pipeline.
2. 🔥 Firecrawl Extract API (POST): targets the Quiver Quantitative "Congress Trading" page and sends a structured prompt asking for all trades over $50K in the past month (see the sketch below).
3. ⏳ Wait Node: allows time for Firecrawl to finish processing before retrieving results.
4. 📥 Firecrawl Get Result API (GET): retrieves the extracted and structured data.
5. 🧠 OpenAI Chat Model (GPT-4o): formats the raw trading data into a readable summary that includes the date of the transaction, the stock/asset traded, the amount, and the Congress member's name and political party.
6. 📧 Gmail Node: sends the summary to your inbox with the subject "Congress Trade Updates - QQ".

## 🧠 Why This Is Useful

Congressional trading activity often reveals valuable signals, especially when high-value trades are made. This workflow:

- Saves time manually tracking Quiver Quant updates
- Converts complex tables into a daily, readable email
- Keeps investors, researchers, and newsrooms in the loop, hands-free

## 🛠 Requirements

- Firecrawl API key (with extract access)
- OpenAI API key
- Gmail OAuth2 credentials
- n8n (self-hosted or cloud)

## 💬 Sample Output: Congress Trade Summary – May 21

- Nancy Pelosi (D) sold TSLA for $85,000 on April 28
- John Raynor (R) purchased AAPL worth $120,000 on May 2
- ... and more

## 🪜 Setup Steps

1. Add your Firecrawl, OpenAI, and Gmail credentials in n8n.
2. Adjust the schedule node to your desired time.
3. Customize the OpenAI system prompt if you want a different summary style.
4. Deploy the workflow and enjoy your daily edge.
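To make the Firecrawl steps concrete, here is a minimal Python sketch of the POST-wait-GET pattern the workflow follows against Firecrawl's extract endpoint. The endpoint paths and response fields (`/v1/extract`, the returned job `id`) reflect Firecrawl's extract API as I understand it, so treat them as assumptions and confirm against the current docs; the prompt text is illustrative only.

```python
# Minimal sketch of the Firecrawl extract flow used by the workflow
# (POST to start extraction, poll the job id, then read the structured result).
import os
import time
import requests

API_KEY = os.environ["FIRECRAWL_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

start = requests.post(
    "https://api.firecrawl.dev/v1/extract",  # assumed endpoint; check current Firecrawl docs
    headers=HEADERS,
    json={
        "urls": ["https://www.quiverquant.com/congresstrading/"],
        "prompt": "List all congressional trades over $50,000 from the past month "
                  "with date, asset, amount, member name and party.",
    },
    timeout=60,
)
job_id = start.json()["id"]

# Mirror of the workflow's Wait node: poll until the extraction job finishes.
while True:
    result = requests.get(
        f"https://api.firecrawl.dev/v1/extract/{job_id}", headers=HEADERS, timeout=60
    ).json()
    if result.get("status") in ("completed", "failed"):
        break
    time.sleep(10)

print(result.get("data"))  # structured trade data to hand to the LLM summarizer
```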
by Oskar
With this workflow you can extract data from resume documents uploaded via a Telegram bot. The workflow transforms the readable content of a PDF resume into structured data using AI nodes, then renders that data as formatted, plain HTML and returns a new PDF. You can modify this workflow to perform other actions on the structured data (e.g. insert it into a database or create other well-formatted documents).

The functionality of this workflow was presented during the n8n community call on March 7, 2024 - a recording of the presentation is available here.

⚠️ This workflow was made for demo purposes. If you want to use it in real life, please make sure the necessary measures for personal data protection are in place.

## How it works

1. A user uploads a readable PDF resume document to the Telegram bot.
2. After authentication based on the chat ID parameter, the workflow extracts text from the PDF and passes it into an AI chain with connected sub-nodes: an OpenAI Chat Model and a Structured Output (JSON) Parser.
3. Each extracted section (employment history, projects, etc.) is formatted into the desired HTML structure.
4. Finally, the document is converted into a new, structured PDF using Gotenberg (see the sketch below).

💡 This workflow requires an installed Gotenberg instance. If you are not familiar with this software, please have a look at my YouTube tutorial. You can also replace the call to Gotenberg with another PDF generation service (such as PDFMonkey or ApiTemplate).

## Set up steps

1. Create a Telegram bot and add its credentials in n8n.
2. Set your chat ID parameter in the Auth node.
3. Adjust the JSON schema in the Structured Output Parser according to your needs.
4. Optionally: replace the HTTP call to Gotenberg with a PDF generation service of your choice.

If you like this workflow, please subscribe to my YouTube channel and/or my newsletter.
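For step 4, here is a minimal Python sketch of the HTTP call to a Gotenberg instance that the workflow's HTTP Request node performs, assuming Gotenberg 7+ running locally on port 3000. The route `/forms/chromium/convert/html` and the `index.html` filename requirement come from Gotenberg's documentation, but verify them against your installed version; the HTML content is a placeholder.

```python
# Minimal sketch: convert the generated resume HTML into a PDF via Gotenberg.
# Assumes a Gotenberg 7+ instance at http://localhost:3000; adjust the URL to your setup.
import requests

html = "<html><body><h1>Jane Doe</h1><p>Employment history...</p></body></html>"

response = requests.post(
    "http://localhost:3000/forms/chromium/convert/html",
    # Gotenberg expects the main document to be uploaded with the filename "index.html".
    files={"files": ("index.html", html, "text/html")},
    timeout=60,
)
response.raise_for_status()

with open("resume.pdf", "wb") as f:
    f.write(response.content)  # binary PDF returned in the response body
```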
by Mahmoud Ashraf
This workflow automatically creates in-depth, SEO-friendly Arabic articles based on any keyword you provide. It researches the topic, generates a full article outline, writes every section in Arabic, and saves the final article directly to your Notion workspace, all in a few clicks.

## How It Works

1. You submit a simple web form with your keyword and (optionally) an article title.
2. The workflow researches the topic using advanced AI, gathers trending questions from Google, and creates a detailed, structured outline.
3. Each section of the article is written in Arabic by AI, following best SEO practices and including real FAQs.
4. The completed article is automatically formatted and saved to your Notion database (see the sketch below), ready for review or publishing.

## Setup Instructions

What you need:

- An OpenAI API key (for AI-powered writing and outline generation)
- An OpenRouter API key (for research via Perplexity/Sonar AI)
- A Notion account and Notion API integration (for saving articles)
- A DataForSEO account (for fetching Google "People Also Ask" questions)

How to set up:

1. Import the workflow into your n8n instance.
2. Connect your API credentials for OpenAI, OpenRouter, Notion, and (optionally) DataForSEO.
3. Update your Notion database ID in the workflow settings.
4. Deploy the workflow.
5. Fill out the web form to generate your first article.

Setup time: 10–20 minutes if you already have your accounts.

Tip: You can fully customize the outline and writing prompts for your target audience or topic. The workflow is modular, so it is easy to adapt for different languages or content styles.
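As a rough sketch of the final save step, below is how a generated article could be written into a Notion database via the Notion API from Python. The database ID, the property names (`Title`, `Keyword`), and the chunking of body text into paragraph blocks are assumptions for illustration; adjust them to match your own database schema.

```python
# Minimal sketch: save a generated article into a Notion database.
# Database ID and property names ("Title", "Keyword") are hypothetical; match your schema.
import os
import requests

NOTION_TOKEN = os.environ["NOTION_TOKEN"]
DATABASE_ID = "your-database-id"  # placeholder

headers = {
    "Authorization": f"Bearer {NOTION_TOKEN}",
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}

article_title = "دليل شامل عن التسويق بالمحتوى"
article_body = "النص الكامل للمقال..."

payload = {
    "parent": {"database_id": DATABASE_ID},
    "properties": {
        "Title": {"title": [{"type": "text", "text": {"content": article_title}}]},
        "Keyword": {"rich_text": [{"type": "text", "text": {"content": "التسويق بالمحتوى"}}]},
    },
    # Split the body into paragraph blocks (Notion caps each rich_text chunk at ~2000 chars).
    "children": [
        {
            "object": "block",
            "type": "paragraph",
            "paragraph": {"rich_text": [{"type": "text", "text": {"content": chunk}}]},
        }
        for chunk in (article_body[i:i + 1800] for i in range(0, len(article_body), 1800))
    ],
}

resp = requests.post("https://api.notion.com/v1/pages", headers=headers, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["url"])  # link to the newly created article page
```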
by Jimleuk
This n8n workflow demonstrates how to manage your Qdrant vector store when there is a need to keep it in sync with local files. It covers creating, updating and deleting vector store records, ensuring our chatbot assistant is never outdated or misleading.

## Disclaimer

This workflow depends on local files accessed through the local filesystem and so will only work on a self-hosted version of n8n at this time. It is possible to amend this workflow to work on n8n cloud by replacing the local file trigger and read file nodes.

## How it works

- A local directory where bank statements are downloaded to is monitored via a local file trigger. The trigger watches for the file created, file changed and file deleted events.
- When a file is created, its contents are uploaded to the vector store.
- When a file is updated, its previous records are replaced (see the sketch below).
- When a file is deleted, the corresponding records are also removed from the vector store.
- A simple question-and-answer chatbot is set up to answer any questions about the bank statements in the system.

## Requirements

- A self-hosted version of n8n. Some of the nodes used in this workflow only work with the local filesystem.
- A Qdrant instance to store the records.

## Customising the workflow

This workflow can also work with remote data. Try integrating accounting or CRM software to build a managed system for payroll, invoices and more.

Want to go fully local? A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/189F1fNOiw6naNSlSwnyLVEm_Ho_IFfdM/view?usp=sharing
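As a rough illustration of the update path, here is a minimal Python sketch of the delete-then-reinsert pattern against Qdrant using the official `qdrant-client`. The collection name, the `metadata.file_path` payload key, and the embedding function are assumptions for illustration; the workflow itself performs the equivalent steps with n8n's Qdrant and embeddings nodes.

```python
# Minimal sketch: when a bank statement changes, drop its old points and re-insert new ones.
# Collection name and the "metadata.file_path" payload key are assumptions for illustration.
import uuid
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")
COLLECTION = "bank_statements"

def embed(text: str) -> list[float]:
    # Placeholder: in the real workflow an embeddings node (e.g. OpenAI) produces this vector.
    raise NotImplementedError

def replace_file_records(file_path: str, chunks: list[str]) -> None:
    # 1. Remove every point previously stored for this file.
    client.delete(
        collection_name=COLLECTION,
        points_selector=models.FilterSelector(
            filter=models.Filter(
                must=[
                    models.FieldCondition(
                        key="metadata.file_path",
                        match=models.MatchValue(value=file_path),
                    )
                ]
            )
        ),
    )
    # 2. Re-insert the freshly chunked and embedded contents.
    client.upsert(
        collection_name=COLLECTION,
        points=[
            models.PointStruct(
                id=str(uuid.uuid4()),
                vector=embed(chunk),
                payload={"metadata": {"file_path": file_path}, "text": chunk},
            )
            for chunk in chunks
        ],
    )
```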
by Jimleuk
This n8n workflow demonstrates a simple multi-agent setup to perform the task of competitor research. It showcases how using the HTTP Request tool can reduce the number of nodes needed to achieve a workflow like this.

## How it works

- A source company is defined by the user, which is sent to Exa.ai to find competitors (see the sketch below).
- Each competitor is then funnelled through 3 AI agents that go out onto the internet and retrieve specific datapoints about the competitor: company overview, product offering and customer reviews.
- Once the agents are finished, the results are compiled into a report which is then inserted into a Notion database.

Check out an example output here: https://jimleuk.notion.site/2d1c3c726e8e42f3aecec6338fd24333?v=de020fa196f34cdeb676daaeae44e110&pvs=4

## Requirements

- An OpenAI account for the LLM.
- An Exa.ai account for access to their AI search engine.
- A SerpAPI account for Google search.
- A Firecrawl.dev account for web scraping.
- A Notion.com account for the database to save final reports.

## Customising the workflow

- Add additional agents to gather more datapoints such as SEO keywords and metrics.
- Not using Notion? Feel free to swap this out for your own database.
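To illustrate the first step, here is a minimal Python sketch of calling Exa.ai's find-similar endpoint to turn a source company's website into a list of competitor candidates. The endpoint path, header name, and the `excludeSourceDomain` flag follow Exa's public API as I understand it, so treat them as assumptions and double-check against the current documentation; the source URL is hypothetical.

```python
# Minimal sketch: ask Exa.ai for companies similar to the source company's website.
# Endpoint, header and parameter names are assumptions; confirm against Exa's API docs.
import os
import requests

resp = requests.post(
    "https://api.exa.ai/findSimilar",
    headers={"x-api-key": os.environ["EXA_API_KEY"], "Content-Type": "application/json"},
    json={
        "url": "https://www.example-source-company.com",  # hypothetical source company
        "numResults": 10,
        "excludeSourceDomain": True,  # keep the source company itself out of the results
    },
    timeout=30,
)
resp.raise_for_status()

competitors = [{"name": r.get("title"), "url": r["url"]} for r in resp.json().get("results", [])]
print(competitors)  # each entry then gets handed to the 3 research agents
```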
by Łukasz
## Who is it for

If you are a postmaster or you manage an email server, you can set up DKIM and SPF records to make spoofing your email address hard. On your domain you can also set up a DMARC record to receive XML reports from email providers (the rua tag). Those reports contain data on whether the email the provider received passed DKIM and SPF verification. Since the DMARC reporting address is public, you will receive a lot of emails from providers, not only when DKIM/SPF fail. There is no need to read them all: you probably only need to know when SPF/DKIM failed. This workflow therefore automatically parses all DMARC reports that come from email providers, but ONLY sends you a notification if SPF or DKIM failed, meaning that either someone is trying to spoof your email or your DKIM/SPF is improperly set up.

## How it works

1. The workflow monitors the postmaster mailbox for DMARC reports (rua).
2. It unpacks each report and parses the XML into JSON (see the sketch below).
3. It maps the JSON and formats fields for MySQL/MariaDB input.
4. It inserts the data into the database.
5. It sends a notification on DKIM or SPF failure.

Remember to set up:

- the email input mailbox
- notification channels for Slack and for email
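As a rough illustration of steps 2 and 5, here is a minimal Python sketch that unpacks a zipped or gzipped DMARC aggregate report, reads the `policy_evaluated` results, and keeps only the records where DKIM or SPF failed. The XML element names follow the standard DMARC aggregate report format; the file handling is simplified compared to the workflow's email-attachment nodes.

```python
# Minimal sketch: unpack a DMARC aggregate report and keep only DKIM/SPF failures.
import gzip
import io
import zipfile
import xml.etree.ElementTree as ET

def read_report(raw: bytes, filename: str) -> bytes:
    """DMARC rua attachments usually arrive as .zip or .xml.gz archives."""
    if filename.endswith(".zip"):
        with zipfile.ZipFile(io.BytesIO(raw)) as zf:
            return zf.read(zf.namelist()[0])
    if filename.endswith(".gz"):
        return gzip.decompress(raw)
    return raw  # already plain XML

def extract_failures(xml_bytes: bytes) -> list[dict]:
    root = ET.fromstring(xml_bytes)
    org = root.findtext("report_metadata/org_name")
    failures = []
    for record in root.iter("record"):
        dkim = record.findtext("row/policy_evaluated/dkim")
        spf = record.findtext("row/policy_evaluated/spf")
        if dkim == "fail" or spf == "fail":
            failures.append({
                "reporter": org,
                "source_ip": record.findtext("row/source_ip"),
                "count": int(record.findtext("row/count") or 0),
                "dkim": dkim,
                "spf": spf,
                "header_from": record.findtext("identifiers/header_from"),
            })
    return failures  # empty list -> nothing to notify about
```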
by Jimleuk
This n8n template is one of a 3-part series exploring use-cases for clustering vector embeddings:

- Survey Insights
- Customer Insights
- Community Insights

This template demonstrates the Survey Insights scenario, where survey participant responses can be quickly grouped by similarity and an AI agent can generate insights on those groupings. With this workflow, researchers can save days and even weeks of work breaking down cohorts of participants and identifying frequently mentioned positives and negatives.

Sample output: https://docs.google.com/spreadsheets/d/e/2PACX-1vT6m8XH8JWJTUAfwojc68NAUGC7q0lO7iV738J7aO5fuVjiVzdTRRPkMmT1C4N8TwejaiT0XrmF1Q48/pubhtml#

## How it works

- All survey questions and responses are imported from a Google Sheet.
- Responses are then inserted into a Qdrant collection, carefully tagged with the question and survey metadata.
- For each question, all relevant responses are put through a clustering algorithm using the Python Code node (see the sketch below). The Qdrant points are returned in clustered groups.
- Each group is looped over to fetch the payloads of its points and feed them to the AI agent, which summarises them and generates insights.
- The resulting insights and raw responses are then saved to the Google Spreadsheet for further analysis by the researcher.

## Requirements

- Survey data and format as shown in the attached Google Sheet.
- A Qdrant vector store for storing embeddings.
- An OpenAI account for embeddings and the LLM.

## Customising the template

- Adjust the clustering parameters so they make sense for your data.
- Use more clusters for open-ended questions and fewer clusters when responses are multiple choice.
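To make the clustering step concrete, here is a minimal Python sketch of the kind of grouping a Code node could perform on embedding vectors fetched from Qdrant, using scikit-learn's KMeans. The workflow's actual Code node may use a different algorithm or parameters; the number of clusters and the input format here are illustrative.

```python
# Minimal sketch: group survey-response embeddings into clusters of similar answers.
# Assumes points were fetched from Qdrant as (id, vector) pairs; the cluster count is illustrative.
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

def cluster_points(points: list[tuple[str, list[float]]], n_clusters: int = 5) -> dict[int, list[str]]:
    ids = [pid for pid, _ in points]
    vectors = np.array([vec for _, vec in points])
    labels = KMeans(n_clusters=n_clusters, n_init="auto", random_state=42).fit_predict(vectors)
    groups: dict[int, list[str]] = defaultdict(list)
    for point_id, label in zip(ids, labels):
        groups[int(label)].append(point_id)
    return groups  # each group of point ids is then looped to fetch payloads for the AI agent

# Example with tiny fake embeddings (real vectors would come from the embeddings model)
demo = [("a", [0.1, 0.2]), ("b", [0.11, 0.19]), ("c", [0.9, 0.8]), ("d", [0.88, 0.79])]
print(cluster_points(demo, n_clusters=2))
```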
by Jimleuk
This n8n workflow is a proof-of-concept template exploring how we might work with multimodal LLMs and their multi-image analysis capabilities. In this demo, we compare 2 screenshots of a webpage taken at different timestamps and pass both to our multimodal LLM for a visual comparison of differences. Handling multiple binary inputs (i.e. images) in an AI request is supported by n8n's basic LLM node.

## How it works

This template is intended to run in 2 parts: first to generate the base screenshots, and next to run the visual regression test, which captures fresh screenshots.

- Starting with a list of webpages captured in a Google Sheet, base screenshots are captured for each using an external web scraping service called Apify.com (I prefer Apify, but feel free to use whichever web scraping service is available to you).
- These base screenshots are uploaded to Google Drive and will be referenced later when we run our testing.
- In phase 2 of the workflow, a scheduled trigger fires sometime in the future and reuses our web scraping service to generate fresh screenshots of the desired webpages.
- Next, we re-download our base screenshots in parallel and, with both old and new captures, pass these to our LLM node. In the LLM node's options, we define 2 "user message" inputs with the type of binary (data) for our images (see the sketch below).
- Finally, we prompt our LLM with our testing criteria and capture the regressions detected. Note that results will vary depending on which LLM you use.
- A final report can be generated from the LLM's output and is uploaded to Linear.

## Requirements

- An Apify.com API key for the web screenshotting service.
- Google Drive and Sheets access to store the list of webpages and the captures.

## Customising this workflow

- Have your own preferred web screenshotting service? Feel free to swap out Apify for your service of choice.
- If the web screenshot is too large, it may prove difficult for the LLM to spot differences with precision. Try splitting up captures into smaller images instead.
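As a rough equivalent of the LLM step outside n8n, here is a minimal Python sketch that sends two base64-encoded screenshots to OpenAI's chat completions API in a single request and asks for a visual diff. The model name, file names, and prompt wording are assumptions; the two image parts mirror the two binary "user message" inputs configured in the LLM node.

```python
# Minimal sketch: pass both the base and the fresh screenshot to a multimodal LLM
# and ask it to report visual regressions. Model name and prompt are illustrative.
import base64
from openai import OpenAI

client = OpenAI()  # uses the OPENAI_API_KEY environment variable

def to_data_url(path: str) -> str:
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any multimodal model with multi-image support
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Compare these two screenshots of the same page. "
                                         "List any layout, content or styling regressions."},
                {"type": "image_url", "image_url": {"url": to_data_url("base_capture.png")}},
                {"type": "image_url", "image_url": {"url": to_data_url("fresh_capture.png")}},
            ],
        }
    ],
)
print(response.choices[0].message.content)  # regression notes to include in the report
```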