by Mark Shcherbakov
**Video Guide**
I prepared a comprehensive guide detailing how to create a Smart Agent that automates meeting task management by analyzing transcripts, generating tasks in Airtable, and scheduling follow-ups when necessary. Youtube Link

**Who is this for?**
This workflow is ideal for project managers, team leaders, and business owners looking to enhance productivity during meetings. It is particularly helpful for those who need to convert discussions into actionable items swiftly and effectively.

**What problem does this workflow solve?**
Managing action items from meetings can often lead to missed tasks and poor follow-up. This automation alleviates that issue by automatically generating tasks from meeting transcripts, keeping everyone informed about their responsibilities and streamlining communication.

**What this workflow does**
The workflow leverages n8n to create a Smart Agent that listens for completed meeting transcripts, processes them using AI, and generates tasks in Airtable. Key functionalities include:
- Capturing completed meeting events through webhooks.
- Extracting relevant meeting details such as transcripts and participants using API calls.
- Generating structured tasks from meeting discussions and sending notifications to clients.

- **Webhook Integration**: Listens for meeting completion events to trigger subsequent actions.
- **API Requests for Data**: Pulls necessary details like transcripts and participant information from Fireflies.
- **Task and Notification Generation**: Automatically creates tasks in Airtable and notifies clients of their responsibilities.

**Setup**
N8N Workflow:
1. **Configure the Webhook**: Set up a webhook to capture meeting completion events and integrate it with Fireflies.
2. **Retrieve Meeting Content**: Use GraphQL API requests to extract meeting details and transcripts, ensuring appropriate authentication through Bearer tokens (see the sketch below).
3. **AI Processing Setup**: Define system messages for AI tasks and configure connections to the AI chat model (e.g., OpenAI's GPT) to process transcripts.
4. **Task Creation Logic**: Create structured tasks based on AI output, ensuring necessary details are captured and records are created in Airtable.
5. **Client Notifications**: Use an email node to notify clients about their tasks, ensuring communications are client-specific.
6. **Scheduling Follow-Up Calls**: Set up Google Calendar events if follow-up meetings are required, populating details from the original meeting context.
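The transcript retrieval step can be pictured as a short script. This is a minimal sketch assuming the public Fireflies GraphQL endpoint and a transcript query with these field names; verify the exact schema against your Fireflies account before relying on it.

```typescript
// Minimal sketch of the "Retrieve Meeting Content" step, assuming the Fireflies
// GraphQL endpoint and a transcript query with these (assumed) field names.
const FIREFLIES_API = "https://api.fireflies.ai/graphql";

async function fetchTranscript(meetingId: string, apiKey: string) {
  const query = `
    query Transcript($id: String!) {
      transcript(id: $id) {
        title
        sentences { speaker_name text }
      }
    }`;

  const res = await fetch(FIREFLIES_API, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // Bearer token auth, as in the workflow
    },
    body: JSON.stringify({ query, variables: { id: meetingId } }),
  });

  if (!res.ok) throw new Error(`Fireflies request failed: ${res.status}`);
  return (await res.json()).data?.transcript;
}
```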
by Reyhan
Automatically clean up your Gmail inbox by deleting unwanted emails, validated by Gemini AI. Ideal for anyone tired of manual inbox cleanup, this workflow helps you save time while staying in control, with full transparency via Telegram alerts.

**How it works**
- Scans your Gmail inbox in adjustable 2-week batches
- Uses Gemini AI to decide if an email should be deleted or skipped
- Applies a label to skipped emails to avoid rechecking them in future runs
- Deletes unwanted emails and sends a Telegram message with the AI's reasoning
- Also notifies on skipped emails, with the explanation included

**Set up steps**
- Connect your Gmail, Gemini AI, and Telegram accounts
- Adjust the AI baseline to control sensitivity (e.g. how strict the filtering should be)
- Set your batch range (default: last 2 weeks, adjustable; see the query sketch below)
- Define your Telegram chat/channel for notifications

Note: Thanks to n8n's modular design, you can easily switch Gemini for another AI model (like OpenAI, Claude, etc.) or replace Telegram with Discord, Slack, or even email, no code changes needed, just swap the nodes.
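The 2-week batch plus the skip label boil down to a Gmail search query. Here is a minimal sketch of how that query could be built; the label name `ai-skip` is an assumption, use whatever label your workflow applies to skipped mail.

```typescript
// Minimal sketch of the batch filter behind the Gmail step: restrict to the last
// 14 days and exclude mail already labelled as skipped by the AI.
function buildGmailQuery(days = 14, skipLabel = "ai-skip"): string {
  // Gmail search operators: in:inbox, newer_than:<n>d, -label:<name>
  return `in:inbox newer_than:${days}d -label:${skipLabel}`;
}

console.log(buildGmailQuery());   // "in:inbox newer_than:14d -label:ai-skip"
console.log(buildGmailQuery(7));  // a narrower 1-week batch
```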
by David Ashby
Complete MCP server exposing all Humantic AI Tool operations to AI agents. Zero configuration needed: all 3 operations are pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Humantic AI Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the sketch below)
• Native Integration: Uses the official n8n Humantic AI Tool node with full error handling

📋 Available Operations (3 total)
Every possible Humantic AI Tool operation is included:

🔧 Profile (3 operations)
• Create a profile
• Get a profile
• Update a profile

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Humantic AI Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Humantic AI Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
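For readers unfamiliar with $fromAI(), here is a minimal illustration of how such placeholders might look inside a tool node's parameters. The parameter names (`user_id`, `refresh`) are hypothetical; the real fields come from the Humantic AI Tool node itself.

```typescript
// Illustrative sketch of how $fromAI() placeholders let the AI agent fill tool
// parameters at call time. n8n evaluates the {{ ... }} expression when the agent
// invokes the tool; the "=" prefix marks the value as an expression.
const profileToolParams = {
  userId: "={{ $fromAI('user_id', 'LinkedIn URL or email of the person', 'string') }}",
  refresh: "={{ $fromAI('refresh', 'Whether to re-analyze the profile', 'boolean') }}",
};

console.log(profileToolParams); // these strings are what the node config would hold
```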
by Yaron Been
🔍 Scrape Glassdoor with Bright Data

Designed for sales teams, recruiters, and marketers aiming to automate job discovery and prospecting. This workflow scrapes Glassdoor job listings using Bright Data and automatically generates targeted pitches using AI, streamlining lead identification and outreach.

🧩 How It Works
This automation leverages n8n, Bright Data, Google Sheets, and OpenAI:
1. Trigger: Starts with a custom form input (Location, Keyword, Country).
2. Bright Data Job Scrape: Triggers a Bright Data dataset snapshot via HTTP Request, polls snapshot progress using a Wait node to ensure data readiness, and retrieves the full job listings dataset once ready (see the sketch below).
3. Google Sheets Integration: Writes detailed job data (company, role, location, overview, metrics) into a Google Sheet, using a pre-built template for organized data storage.
4. Automated Pitch Generation (AI): Splits listings into actionable parts (company name, title, and description), sends the data to OpenAI (via LangChain) to generate relevant pitches or icebreakers, and saves the generated content back into the same sheet for easy access.

✅ Requirements
Ensure you have the following:
- Google Sheets: Google account; template sheet with columns for job details and AI-generated pitches
- Bright Data: Active account with Dataset API access; API key and dataset ID
- OpenAI: Valid OpenAI API key for GPT models
- n8n Environment: Nodes (HTTP Request, Wait, If, Google Sheets, Split Out, LangChain OpenAI) and credentials (Google Sheets OAuth2, Bright Data API credentials, OpenAI API key)

⚙️ Setup Instructions
Step 1: Prepare Google Sheets
- Copy the provided Google Sheets template
- Do not change the headers
Step 2: Import & Configure Workflow in n8n
- Import the workflow JSON file
- Set the Google Sheets node: link to your copied sheet and confirm the correct tab name
Step 3: Configure Bright Data
- Replace <YOUR_BRIGHT_DATA_API_KEY> with your real key
- Set your dataset ID in all HTTP Request nodes
Step 4: Configure OpenAI (LangChain)
- Connect your OpenAI API key to the LangChain node
- Customize the prompt to match your tone and outreach style
Step 5: Testing & Scheduling
- Test via the manual form trigger
- Schedule runs or leave the form enabled for on-demand use

🧠 Tips & Best Practices
- Use specific keywords and locations for better results
- Adjust polling intervals based on dataset size
- Refine AI prompts regularly to improve pitch quality
- Clean unused columns from your sheet to boost performance

💬 Support & Feedback
For help or customization:
📧 Email: Yaron@nofluff.online
📺 YouTube: @YaronBeen
🔗 LinkedIn: linkedin.com/in/yaronbeen
📚 Bright Data Docs: docs.brightdata.com/introduction
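The snapshot trigger-and-poll loop from step 2 can be sketched outside n8n as follows. Endpoint paths and response fields follow Bright Data's v3 Dataset API docs but should be verified against your account; the dataset ID and input shape are yours.

```typescript
// Minimal sketch of the trigger -> poll -> fetch cycle the workflow performs
// against Bright Data's Dataset API (paths assumed from the v3 docs).
const BASE = "https://api.brightdata.com/datasets/v3";

async function scrapeGlassdoor(apiKey: string, datasetId: string, input: object[]) {
  const headers = { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" };

  // 1. Trigger a snapshot for the form input (location, keyword, country)
  const trigger = await fetch(`${BASE}/trigger?dataset_id=${datasetId}&format=json`, {
    method: "POST", headers, body: JSON.stringify(input),
  }).then(r => r.json());

  // 2. Poll progress until the snapshot is ready (the Wait/If nodes do this in n8n)
  let status = "running";
  while (status === "running") {
    await new Promise(r => setTimeout(r, 30_000)); // wait 30s between polls
    const progress = await fetch(`${BASE}/progress/${trigger.snapshot_id}`, { headers })
      .then(r => r.json());
    status = progress.status;
  }

  // 3. Download the finished dataset (job listings) as JSON
  return fetch(`${BASE}/snapshot/${trigger.snapshot_id}?format=json`, { headers })
    .then(r => r.json());
}
```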
by Mihai Farcas
This n8n workflow creates a financial analysis tool that generates reports on a company's quarterly earnings using the capabilities of OpenAI GPT-4o-mini, Google's Gemini AI, and Pinecone's vector search. By analyzing PDFs of any company's earnings reports from their Investor Relations page, this workflow can answer complex financial questions and automatically compile findings into a structured Google Doc.

**How it works**
Data loading and indexing:
- Fetches links to the PDF earnings documents from a Google Sheet containing a list of file links.
- Downloads the PDFs from Google Drive.
- Parses the PDFs, splits the text into chunks, and generates embeddings using the Embeddings Google AI node (text-embedding-004 model).
- Stores the embeddings and corresponding text chunks in a Pinecone vector database for semantic search (see the sketch below).

Report generation with AI agent:
- Utilizes an AI Agent node with a specifically crafted system prompt; the agent orchestrates the entire process.
- The agent uses a Vector Store Tool to access and retrieve information from the Pinecone database.

Report delivery:
- Saves the generated report as a Google Doc in a specified Google Drive location.

**Set up steps**
1. Google Cloud project & Vertex AI API: Create a Google Cloud project and enable the Vertex AI API for it.
2. Google AI API key: Obtain a Google AI API key from Google AI Studio.
3. Pinecone account and API key: Create a free account on the Pinecone website, obtain your API key from your Pinecone dashboard, and create an index named company-earnings in your Pinecone project.
4. Google Drive (download and save financial documents): Pick a company you want to analyze, download their quarterly earnings PDFs, save the PDFs in Google Drive, and create a Google Sheet that stores a list of file URLs pointing to the PDFs you downloaded and saved to Google Drive.
5. Configure credentials in your n8n environment for: Google Sheets OAuth2, Google Drive OAuth2, Google Docs OAuth2, Google Gemini (PaLM) API (using your Google AI API key), and Pinecone API (using your Pinecone API key).
6. Import and configure the workflow: Import this workflow into your n8n instance, update the List Of Files To Load (Google Sheets) node to point to your Google Sheet, update the Download File From Google Drive node to point to the column containing the file URLs, and update the Save Report to Google Docs node to point to the Google Doc where you want the report saved.
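A minimal sketch of the indexing pass described above: chunk the parsed PDF text, embed each chunk with text-embedding-004, and upsert the vectors into the company-earnings index. The REST endpoints follow the public Gemini and Pinecone docs; confirm them, and your Pinecone index host URL, before use.

```typescript
// Split the parsed PDF text into overlapping chunks, embed each chunk, and
// upsert the vectors plus the original text into Pinecone for retrieval.
function chunkText(text: string, size = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size - overlap) chunks.push(text.slice(i, i + size));
  return chunks;
}

async function indexDocument(text: string, googleKey: string, pineconeKey: string, indexHost: string) {
  for (const [i, chunk] of chunkText(text).entries()) {
    // Embed the chunk (same model the Embeddings Google AI node uses)
    const emb = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/models/text-embedding-004:embedContent?key=${googleKey}`,
      {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ content: { parts: [{ text: chunk }] } }),
      },
    ).then(r => r.json());

    // Upsert vector + source text into the Pinecone index used by the Vector Store Tool
    await fetch(`https://${indexHost}/vectors/upsert`, {
      method: "POST",
      headers: { "Api-Key": pineconeKey, "Content-Type": "application/json" },
      body: JSON.stringify({
        vectors: [{ id: `chunk-${i}`, values: emb.embedding.values, metadata: { text: chunk } }],
      }),
    });
  }
}
```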
by Jimleuk
This n8n template introduces the Dynamic Prompts AI workflow pattern, which is incredibly useful for data extraction tasks where the attributes are unknown or need to remain flexible.

The general idea behind this pattern is that the prompts for the attributes to be extracted live outside the template and so can be changed at any time, without needing to edit the template. This seriously cuts down on maintenance requirements and is reusable for any number of tables at little cost.

Check out the n8n Studio episode here: https://www.youtube.com/watch?v=_fNAD1u8BZw
Community post here: https://community.n8n.io/t/dynamic-prompts-with-n8n-baserow-and-airtable/72052
Looking for the Airtable version? https://n8n.io/workflows/2771-ai-data-extraction-with-dynamic-prompts-and-airtable/

**How it works**
Given we have an "input" field for context and a number of fields for the data we want to extract, this template runs in the background, reacts to any changes to either the "input" or the fields, and automatically updates the rows accordingly.

The key is that Baserow fields have a special property called the "field description". In this pattern, we use this property to let the user store a simple prompt describing the data that should exist in the column. The n8n template reads these column descriptions, aka "prompts", and uses them as instructions to perform tasks on the "input" (see the sketch below). In this template, the "input" is a PDF of a resume/CV and the columns are attributes an HR person would want to extract from it, such as full name, address, last position, years of experience, etc.

**How to use**
First publish this template and ensure it's accessible via its webhook URL. You then have to complete the "create Baserow webhooks" steps to configure your Baserow to send change events to the n8n template. Baserow webhooks are created in the Baserow web interface. Check the template for more instructions.

**Requirements**
- Baserow for tables/database
- OpenAI for the LLM and extraction. Feel free to choose another LLM if preferred.

**Customising this workflow**
If you're not using files, you can replace the "input" field with anything you like. For example, the "input" could be single-line text.
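A minimal sketch of how reading field descriptions as prompts could look outside n8n. The fields endpoint and Token auth follow Baserow's REST docs, but whether a description property is exposed for your fields depends on your Baserow version, so treat the response shape as an assumption.

```typescript
// Read the table's field metadata and keep only columns whose description
// (the "prompt") is filled in, skipping the "input" column holding the document.
async function getColumnPrompts(baserowUrl: string, tableId: number, token: string) {
  const fields = await fetch(`${baserowUrl}/api/database/fields/table/${tableId}/`, {
    headers: { Authorization: `Token ${token}` },
  }).then(r => r.json());

  return fields
    .filter((f: any) => f.description && f.name !== "input")
    .map((f: any) => ({ column: f.name, prompt: f.description }));
}

// Each { column, prompt } pair is then passed to the LLM together with the
// extracted resume text, and the answers are written back to the same row.
```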
by Davide
This workflow combines OpenAI, Retrieval-Augmented Generation (RAG), and WooCommerce to create an intelligent personal shopping assistant. It handles two scenarios:
- Product Search: Extracts user intent (keywords, price ranges, SKUs) and fetches matching products from WooCommerce.
- General Inquiries: Answers store-related questions (e.g., opening hours, policies) using RAG and documents stored in Google Drive.

**How It Works**

1. Chat Interaction & Intent Detection
- **Chat Trigger**: Starts when a user sends a message ("When chat message received").
- **Information Extractor**: Uses OpenAI to analyze the message and determine if the user is searching for a product or asking a general question. Extracts: search (true/false), plus keyword, priceRange, SKU, and category (if product-related).
- Example: { "search": true, "keyword": "red handbags", "priceRange": { "min": 50, "max": 100 }, "SKU": "BAG123", "category": "women's accessories" }

2. Product Search (WooCommerce Integration)
- **AI Agent**: If search is true, routes the request to the personal_shopper tool.
- WooCommerce node: Queries the WooCommerce store using the extracted parameters (keyword, priceRange, SKU), filters products in stock (stockStatus: "instock"), and returns matching products (e.g., "red handbags under €100"); see the sketch below.

3. General Inquiries (RAG System)
- **RAG Tool**: If search is false, uses the Qdrant Vector Store to retrieve store information from documents.
- Google Drive integration: Documents (e.g., store policies, FAQs) are stored in Google Drive, then downloaded, split into chunks, and embedded into Qdrant for semantic search.
- OpenAI Chat Model: Generates answers based on the retrieved documents (e.g., "Our store opens at 9 AM").

**Set Up Steps**

1. Configure the RAG system
- **Google Drive setup**: Upload your store documents and update the Google Drive2 node with your folder ID.
- **Qdrant vector database**: Clean the collection (update the Qdrant Vector Store node with your URL) and use Embeddings OpenAI to convert documents into vectors.

2. Configure OpenAI & WooCommerce
- **OpenAI credentials**: Add your API key to all OpenAI nodes (OpenAI Chat Model, Embeddings OpenAI, etc.).
- **WooCommerce integration**: Connect your WooCommerce store (credentials in the personal_shopper node) and ensure product data is synced and accessible.

3. Customize the AI agent
- **Intent detection**: Modify the Information Extractor's system prompt to align with your store's terminology.
- **RAG responses**: Update the tool description to reflect your store's documents.

Notes: This template is ideal for e-commerce businesses needing a hybrid assistant for product discovery and customer support. Need help customizing? Contact me for consulting and support or add me on LinkedIn.
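A minimal sketch of the personal_shopper lookup, assuming WooCommerce's standard REST products endpoint: the fields extracted by the Information Extractor become query parameters. Note that the category filter on this endpoint expects a category ID rather than a name, so a real implementation would resolve the extracted category first; that detail is left out here.

```typescript
// Turn the extracted intent into a WooCommerce product query for in-stock items.
async function searchProducts(store: string, key: string, secret: string, intent: {
  keyword?: string; SKU?: string; priceRange?: { min?: number; max?: number };
}) {
  const params = new URLSearchParams({ stock_status: "instock", per_page: "10" });
  if (intent.keyword) params.set("search", intent.keyword);
  if (intent.SKU) params.set("sku", intent.SKU);
  if (intent.priceRange?.min != null) params.set("min_price", String(intent.priceRange.min));
  if (intent.priceRange?.max != null) params.set("max_price", String(intent.priceRange.max));

  // WooCommerce REST API keys can be sent as HTTP Basic auth
  const auth = Buffer.from(`${key}:${secret}`).toString("base64");
  const res = await fetch(`${store}/wp-json/wc/v3/products?${params}`, {
    headers: { Authorization: `Basic ${auth}` },
  });
  return res.json(); // matching in-stock products, e.g. "red handbags under €100"
}
```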
by Chris Carr
**Use Case**
When building chatbots that interface through applications such as Telegram and WhatsApp, users often send multiple shorter messages in quick succession instead of a single, longer message. This workflow accounts for that behaviour.

**What it Does**
This workflow allows users to send several messages in quick succession and treats them as one coherent conversation instead of separate messages requiring individual responses.

**How it Works**
- When messages arrive, they are stored in a Supabase PostgreSQL table.
- The system waits briefly to see if additional messages arrive.
- If no new messages arrive within the waiting period, all queued messages are combined and processed as a single conversation, responded to with one unified reply, and deleted from the queue (see the sketch below).

**Setup**
- Create a table in Supabase called message_queue. It needs the following columns: user_id (uint8), message (text), and message_id (uint8).
- Add your Telegram, Supabase, OpenAI, and PostgreSQL credentials.
- Activate the workflow and test it by sending multiple messages to the Telegram bot in one go.
- Wait ten seconds, after which you will receive a single reply to all of your messages.

**How to Modify it to Your Needs**
- Change the value of Wait Amount in the Wait 10 Seconds node to adjust the buffering window.
- Add a System Message to the AI Agent to tailor it to your specific use case.
- Replace the OpenAI sub-node to use a different language model.
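The buffering behaviour is easiest to see as plain code. This is a minimal sketch of the debounce idea, independent of n8n and Supabase; the in-memory map stands in for the message_queue table, and the reply callback stands in for the AI Agent plus the Telegram send step.

```typescript
// Queue each incoming message, wait a debounce window, and only reply if no
// newer message arrived for that user in the meantime.
type QueuedMessage = { user_id: number; message: string; message_id: number };

const queue = new Map<number, QueuedMessage[]>(); // stand-in for message_queue

async function onIncomingMessage(msg: QueuedMessage, reply: (text: string) => Promise<void>) {
  const userQueue = queue.get(msg.user_id) ?? [];
  userQueue.push(msg);
  queue.set(msg.user_id, userQueue);

  const lastSeenId = msg.message_id;
  await new Promise(r => setTimeout(r, 10_000)); // the "Wait 10 Seconds" node

  const current = queue.get(msg.user_id) ?? [];
  const newestId = Math.max(...current.map(m => m.message_id));
  if (newestId !== lastSeenId) return; // a newer message restarted the window

  // Combine all queued messages into one conversation, reply once, clear the queue
  const combined = current.map(m => m.message).join("\n");
  await reply(`(single unified answer to) ${combined}`);
  queue.delete(msg.user_id);
}
```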
by Audun
A reusable and production-ready n8n workflow that secures public webhooks using Bearer Token authentication and dynamic request validation.

✨ What It Does
- **Verifies Bearer Token**: Compares the Authorization header with a configured secret token.
- **Validates Required Fields**: Checks that all expected fields are present in the incoming request body.
- **Returns Standardized JSON Responses**:
  - 401 Unauthorized if the token is missing or invalid
  - 400 Bad Request if required fields are missing
  - 200 OK with a custom success payload

👤 Who It's For
- Developers exposing n8n workflows as APIs
- No-code/low-code builders integrating with external forms or tools
- Anyone needing simple authentication and validation on incoming webhooks

💡 Why Use It
- 🔒 Secure: Prevents unauthorized access to your public workflows
- 🧼 Clean: Centralized configuration for the token and required fields
- ⚙️ Flexible: Easy to extend and customize for any use case

🛠 Setup Instructions
1. Configure values in the Configuration node (see the sketch below):
   - Set your secret token: config.bearerToken = YOUR_TOKEN
   - Define the required request fields by key, for example: config.requiredFields.message = true; config.requiredFields.email = true;
   - ✅ Only the keys matter; the values can be anything.
2. Plug in your business logic: replace the "Add workflow nodes here" section with your own logic.
3. Customize the success response: edit the Create Response node to shape your success payload.

🧪 Use Cases
- Securing public form submissions
- Creating internal API endpoints
- Validating data from external services

📌 Use this as a base for building secure, API-style workflows in n8n.

👋 Hello! I'm Audun / xqus. If my n8n workflows saved you time or sparked ideas, consider sending a little support my way. It helps me keep building cool stuff — and maybe grab a coffee ☕ along the way!
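A minimal sketch of the validation performed by the Configuration and If nodes, written as a plain function returning the same standardized status codes.

```typescript
// Check the Bearer token, check required body fields, and return the status
// codes the workflow responds with (401, 400, or 200).
interface WebhookRequest { headers: Record<string, string>; body: Record<string, unknown>; }

const config = {
  bearerToken: "YOUR_TOKEN",                       // config.bearerToken
  requiredFields: { message: true, email: true },  // config.requiredFields.*
};

function validate(req: WebhookRequest): { status: number; body: object } {
  const auth = req.headers["authorization"] ?? "";
  if (auth !== `Bearer ${config.bearerToken}`) {
    return { status: 401, body: { error: "Unauthorized" } };
  }

  const missing = Object.keys(config.requiredFields).filter(f => req.body[f] == null);
  if (missing.length > 0) {
    return { status: 400, body: { error: "Bad Request", missing } };
  }

  return { status: 200, body: { success: true } }; // shaped by the Create Response node
}
```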
by Joseph LePage
This n8n workflow template is designed to integrate a DeepSeek AI agent with Telegram, incorporating long-term memory capabilities for personalized and context-aware responses. Here's a detailed breakdown:

**Core Features**
- Telegram Integration: Uses a webhook to receive messages from Telegram users and validates user identity and message content before processing.
- AI-Powered Responses: Employs DeepSeek's AI models for conversational interactions and includes memory capabilities to personalize responses based on past interactions.
- Error Handling: Sends an error message if the input cannot be processed.

**Model Options** 🧠
- **DeepSeek-V3 Chat**: Handles general conversational tasks.
- **DeepSeek-R1 Reasoning**: Provides advanced reasoning capabilities for complex queries.
- **Memory Buffer Window**: Maintains session context for ongoing conversations.

**Quick Setup** 🛠️
Telegram webhook configuration:
- Set up a webhook using the Telegram Bot API: https://api.telegram.org/bot{my_bot_token}/setWebhook?url={url_to_send_updates_to}
- Replace {my_bot_token} with your bot's token and {url_to_send_updates_to} with your n8n webhook URL.
- Verify the webhook setup using: https://api.telegram.org/bot{my_bot_token}/getWebhookInfo

DeepSeek API configuration:
- Base URL: https://api.deepseek.com
- Obtain your API key from the DeepSeek platform (see the sketch below for an example call).

**Implementation Details** 🔧
- User Validation: The workflow validates the user's first name, last name, and ID using data from incoming Telegram messages. Only authorized users proceed to the next steps.
- Message Routing: Routes messages based on their type (text, audio, or image) using a switch node, ensuring appropriate handling for each message format.
- AI Agent Interaction: Processes text input using the DeepSeek-V3 or DeepSeek-R1 models. Customizable system prompts define the AI's behavior and rules, ensuring user-centric and context-aware responses.
- Memory Management: Retrieves long-term memories stored in Google Docs to enhance personalization and saves new memories based on user interactions, ensuring continuity across sessions.
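A minimal sketch of the chat call behind the agent node. DeepSeek exposes an OpenAI-compatible endpoint at the base URL above; the model names `deepseek-chat` (V3) and `deepseek-reasoner` (R1) follow DeepSeek's docs, but confirm them for your account.

```typescript
// Send a system prompt (rules + retrieved memories) and the validated Telegram
// message to DeepSeek's OpenAI-compatible chat completions endpoint.
async function askDeepSeek(apiKey: string, systemPrompt: string, userMessage: string, reasoning = false) {
  const res = await fetch("https://api.deepseek.com/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: reasoning ? "deepseek-reasoner" : "deepseek-chat",
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userMessage },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // sent back to the user via Telegram
}
```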
by Yaron Been
LinkedIn Enrichment & Ice Breaker Generator
For SDRs, growth marketers, and founders looking to scale personalized outreach. This workflow enriches LinkedIn profile data using Bright Data and generates AI-powered ice breakers using Claude (Anthropic). It automates research and messaging to help you connect smarter and faster — without manual effort.

🧩 How It Works
This workflow combines Google Sheets, Bright Data, and Claude (Anthropic) to fully automate your outreach research:
1. Trigger: Manually trigger the workflow or run it on a schedule (via Manual Trigger or Schedule Trigger).
2. Read Input Sheet: Fetches rows from a Google Sheet. Each row must contain at least a Linkedin_URL_Person and row_number.
3. Prepare Input: Formats each row for Bright Data's API using Set and SplitInBatches nodes.
4. Enrich Profile (Bright Data API): Sends LinkedIn URLs to Bright Data's Dataset API via HTTP Request, waits for the snapshot to be ready using polling logic with Wait, If, and Snapshot Progress nodes, and once ready retrieves the enriched profile data, including: name, city, current company, about section, and recent posts.
5. Update Sheet with Profile Data: Writes the retrieved enrichment data into the corresponding row in Google Sheets (via row_number).
6. Generate Ice Breaker (Claude AI): Sends the enriched profile content to Claude (Anthropic) using a custom prompt, focusing on recent posts to craft relevant, respectful, 1–4-line ice breakers (see the sketch below).
7. Update Sheet with Ice Breaker: Writes the generated ice breaker to the Ice Breaker 1 column in the original row.

✅ Requirements
To use this workflow, you must have the following:
- Google Sheets: A Google account and a Google Sheet with at least one sheet/tab containing the columns Linkedin_URL_Person and row_number (used for mapping input and output rows).
- Bright Data: A Bright Data account with access to the Dataset API, an active dataset that accepts LinkedIn URLs, and an API key with Dataset API access.
- Anthropic Claude: An Anthropic API key (for Claude 3.5 Haiku or other Claude models).
- n8n Environment: Access to HTTP Request, Set, Wait, SplitInBatches, If, and Google Sheets nodes; access to the Claude integration (via LangChain nodes: @n8n/n8n-nodes-langchain); and a credential manager configured with Google Sheets OAuth2 credentials, a Bright Data API key, and an Anthropic API key.

⚙️ Setup Instructions
Step 1: Copy the Google Sheets Template
> 📄 Click here to make a copy
- Fill the Linkedin_URL_Person column with the LinkedIn profile URLs you want to enrich
- Do not modify the headers or add filters to the sheet
- Leave the other columns (name, city, about, posts, ice breaker) blank; the workflow fills them
Step 2: Connect Your Accounts in n8n
- Google Sheets: Create a credential under Google Sheets OAuth2 API
- Bright Data: Add your API key as a credential under HTTP Request (Authorization header)
- Anthropic: Create a credential for the Anthropic API with your Claude key
Step 3: Import and Configure the Workflow
- Import the workflow into your n8n instance.
- In each Google Sheets node: select the copied Google Sheet and the correct tab (usually input or Sheet1).
- In the HTTP Request node to Bright Data: paste your Bright Data dataset ID.
- In the Claude prompt node: optionally adjust the tone and length of the ice breaker prompt.
Step 4: Run the Workflow
- Test it using the Manual Trigger node.
- For daily automation, enable the Schedule Trigger and configure the interval settings.
- Watch your Google Sheet populate with enriched data and tailored ice breakers.

🧠 Tips & Best Practices
- **Bright Data Delay**: Snapshots may take time. The workflow polls the status until complete.
- **Retry Protection**: The If and Wait nodes avoid infinite loops by checking the snapshot status.
- **Mapping via row_number**: Critical to ensure data is updated in the right row.
- **Prompt Engineering**: You can fine-tune Claude's behavior by editing the text prompt.

🧾 Output Example
Once complete, each row in your Google Sheet will contain:

| Linkedin_URL_Person | Name | City | Company | Recent Post | Ice Breaker |
|---------------------|------|------|---------|-------------|-------------|
| linkedin.com/... | Jane Doe | NYC | ACME Corp | “Why AI should replace meetings” | "Loved your post about AI and meetings — finally someone said it!" |

💬 Support & Feedback
Questions? Want to tweak the prompt or expand the enrichment?
📧 Email: Yaron@nofluff.online
📺 YouTube: @YaronBeen
🔗 LinkedIn: linkedin.com/in/yaronbeen
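A minimal sketch of the ice-breaker step (part 6 above). The endpoint and headers follow Anthropic's Messages API docs; the exact Haiku model string is an assumption, use whichever model your credential supports.

```typescript
// Send the enriched profile data to Claude with a short prompt and return the
// generated ice breaker, which the workflow writes to the "Ice Breaker 1" column.
async function generateIceBreaker(apiKey: string, profile: { name: string; about: string; recentPost: string }) {
  const prompt =
    `Write a respectful 1-4 line ice breaker for ${profile.name}, ` +
    `focusing on their recent post: "${profile.recentPost}". About: ${profile.about}`;

  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-haiku-20241022", // assumed model ID; adjust to your account
      max_tokens: 200,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.content[0].text;
}
```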
by Bela
**How it works**: A webhook URL responds to requests with an AI-generated image based on the prompt provided in the URL.

**Setup Steps**:
1. Ideate your prompt.
2. URL-encode the prompt (as shown in the template and the sketch below).
3. Authenticate with your OpenAI credentials.
4. Put together the webhook URL with your prompt and enter it into a web browser.

In this way you can expose a public URL to users, employees, etc. without exposing your OpenAI API key to them. Click here to find a blog post with additional information.
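A minimal sketch of how a caller builds the URL. The webhook path and the `prompt` query-parameter name are hypothetical; use whatever your Webhook node is configured with.

```typescript
// URL-encode the prompt and append it as a query parameter on the webhook URL.
const N8N_WEBHOOK = "https://your-n8n-instance.example.com/webhook/generate-image"; // hypothetical URL

function buildImageUrl(prompt: string): string {
  return `${N8N_WEBHOOK}?prompt=${encodeURIComponent(prompt)}`;
}

// Paste the result into a browser; the workflow calls OpenAI server-side,
// so the API key never reaches the user.
console.log(buildImageUrl("a watercolor fox reading a newspaper"));
```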