by Victor de Coster
The template lets you make Dropcontact batch requests, up to 250 requests every 10 minutes (1,500/hour). Valuable if high-volume email enrichment is expected. Dropcontact will look for an email and basic email qualification when first_name, last_name, and company_name are provided.

+++++++++++++++++++++++++++++++++++++++++

Step 1: Node "Profiles Query"
Connect your own source (Airtable, Google Sheets, Supabase, ...); the template uses Postgres by default.
Note I: Make sure your source returns a maximum of 250 items.
Note II: The next node uses the following variables, so make sure you can map them from your source:
- first_name
- last_name
- website (company_name would work too)
- full_name (see note)
Note III: This template uses the Dropcontact Batch API, which works in a POST & GET setup — not a GET-only request to retrieve data — because Dropcontact needs to process the batch data load properly.

+++++++++++++++++++++++++++++++++++++++++

Step 2: Node "Data Transformation"
Transforms the input variables into the JSON format the Dropcontact API expects for a batch request (a sketch follows at the end of this template). "full_name" is used as a custom identifier to match the returned email back to the proper contact in your source database. To keep things easy, use a unique identifier in the full_name variable.

+++++++++++++++++++++++++++++++++++++++++

Step 3: Node "Bulk Dropcontact Requests"
Enter your Dropcontact credentials in the node Bulk Dropcontact Requests.

+++++++++++++++++++++++++++++++++++++++++

Step 4: Connect your output source by mapping the data you want to use.

+++++++++++++++++++++++++++++++++++++++++

Step 5: Node "Slack" (OPTIONAL)
Connect your Slack account; if an error occurs, you will be notified.

TIP: Run the workflow with a batch of 10 (not 250) first, as it might need an initial run before you can map the data to your final destination. Once the data fields are properly mapped, adjust back to 250.
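Here is a minimal sketch of what the "Data Transformation" Code node might produce, assuming the Dropcontact batch endpoint accepts a `data` array; the field names follow the variables listed in Step 1, and placing `full_name` under `custom_fields` is an assumption — verify the exact schema against the Dropcontact Batch API documentation.

```javascript
// Minimal sketch of a Data Transformation Code node ("Run Once for All Items").
// Builds the batch payload from the source rows fetched in Step 1.
const data = $input.all().map(item => ({
  first_name: item.json.first_name,
  last_name: item.json.last_name,
  website: item.json.website,         // company_name would work too
  custom_fields: {
    full_name: item.json.full_name,   // unique identifier to map results back
  },
}));

return [{ json: { data } }];
```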
by Akhil Varma Gadiraju
Workflow: HubSpot Contact Email Validation with Hunter.io

## Overall Goal
This workflow retrieves contacts from HubSpot that have an email address but haven't yet had their email validated by Hunter. It then iterates through each of these contacts, uses Hunter.io to verify their email, updates the contact record in HubSpot with the validation status and date, and finally sends a summary email notification upon completion.

## How it Works (Step-by-Step Breakdown)

**Node: "When clicking ‘Test workflow’" (Manual Trigger)**
- **Type:** n8n-nodes-base.manualTrigger
- **Purpose:** Start the workflow manually via the n8n interface.
- **Output:** Triggers workflow execution.

**Node: "HubSpot" (HubSpot)**
- **Type:** n8n-nodes-base.hubspot
- **Purpose:** Fetch contacts from HubSpot.
- **Configuration:**
  - Authentication: App Token
  - Operation: Search for contacts
  - Return All: True
  - Filter Groups: Contact HAS_PROPERTY email; Contact NOT_HAS_PROPERTY hunter_email_validation_status
- **Output:** List of contact objects.

**Node: "Loop Over Items" (SplitInBatches)**
- **Type:** n8n-nodes-base.splitInBatches
- **Purpose:** Process each contact one-by-one.
- **Configuration:** Options > Reset: false
- **Output:** Output 1 to "Hunter", Output 2 to "Send Email"

**Node: "Hunter" (Inside the loop)**
- **Type:** n8n-nodes-base.hunter
- **Purpose:** Verify email with Hunter.io.
- **Configuration:**
  - Operation: Email Verifier
  - Email: {{ $json.properties.email }}

**Node: "Add Hunter Details (Contact)" (HTTP Request - Inside the loop)**
- **Type:** n8n-nodes-base.httpRequest
- **Purpose:** Update the HubSpot contact.
- **Configuration:**
  - Method: PATCH
  - URL: https://api.hubapi.com/crm/v3/objects/contacts/{{ $('Loop Over Items').item.json.id }}
  - Headers: Content-Type: application/json
  - Body (JSON): { "properties": { "hunter_email_validation_status": "{{ $json.status }}", "hunter_verification_date": "{{ $now.format('yyyy-MM-dd') }}" } }

**Node: "Wait" (Inside the loop)**
- **Type:** n8n-nodes-base.wait
- **Purpose:** Avoid API rate limits.
- **Configuration:** Wait for 1 second.

**Node: "Replace Me" (NoOp - Inside the loop)**
- **Type:** n8n-nodes-base.noOp
- **Purpose:** Junction node to complete the loop.

**Node: "Send Email" (After the loop completes)**
- **Type:** n8n-nodes-base.emailSend
- **Purpose:** Send a summary notification.
- **Configuration:**
  - From Email: test@gmail.com
  - To Email: akhilgadiraju@gmail.com
  - Subject: "Email Verification Completed for Your HubSpot Contacts"
  - HTML: Formatted confirmation message

## Sticky Notes
- "HubSpot": Create custom properties (hunter_email_validation_status, hunter_verification_date).
- "Add Hunter Details": Ensure field names match HubSpot properties.
- "Wait": Prevent API rate limits.

## How to Customize It
- **Trigger:** Replace the Manual Trigger with a Schedule Trigger (Cron) for automation. Optionally use a HubSpot Trigger for new contact events.
- **HubSpot Node:** Create matching custom properties. Adjust filters and returned properties as needed.
- **Hunter Node:** Minimal customization needed.
- **HTTP Request Node:** Update JSON property names if renaming in HubSpot. Customize the date format as needed.
- **Wait Node:** Adjust wait time to balance speed and API safety.
- **Email Node:** Customize email addresses, subject, and body. Add a dynamic contact count with a Set or Function node (see the sketch below).

## Error Handling
- Add Error Trigger nodes.
- Use If nodes inside the loop to act on certain statuses.

## Use Cases
- Clean your email list.
- Enrich CRM data.
- Prep verified lists for campaigns.
- Automate contact hygiene on a schedule.

## Required Credentials
- **HubSpot App Token** — used by the HubSpot node and HTTP Request node. Create a Private App in HubSpot with the required scopes.
- **Hunter API** — used by the Hunter node.
- **SMTP** — used by the Email Send node. Configure host, port, username, and password.

Made with ❤️ using n8n by Akhil.
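As a starting point for the dynamic contact count mentioned above, here is a minimal sketch of a Code node placed between the "done" output of "Loop Over Items" and "Send Email". The output field names (`subject`, `count`) are illustrative assumptions, not part of the template.

```javascript
// Count how many contacts went through the loop and expose the number
// for use in the Send Email node's subject or body.
const processed = $('Loop Over Items').all().length;

return [{
  json: {
    subject: `Email Verification Completed for ${processed} HubSpot Contacts`,
    count: processed,
  },
}];
```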
by Akhil Varma Gadiraju
AI-Powered GitHub Commit Reviewer

## Overview
**Workflow Name**: AI-Powered GitHub Commit Reviewer
**Author**: Akhil
**Purpose**: This n8n workflow triggers on a GitHub push event, fetches commit diffs, formats them into HTML, runs an AI-powered code review using a Groq LLM, and sends a detailed review via email.

## How It Works (Step-by-Step)

1. GitHub Trigger Node
   - **Type**: n8n-nodes-base.githubTrigger
   - **Purpose**: Initiates the workflow on GitHub push events.
   - **Repo**: akhilv77/relevance
   - **Output**: JSON with commit and repo details.

2. Parser Node
   - **Type**: n8n-nodes-base.set
   - **Purpose**: Extracts key info (repo ID, name, commit SHA, file changes).

3. HTTP Request Node
   - **Type**: n8n-nodes-base.httpRequest
   - **Purpose**: Fetches commit diff details using the GitHub API.
   - **Auth**: GitHub OAuth2 API.

4. Code (HTML Formatter) Node
   - **Type**: n8n-nodes-base.code
   - **Purpose**: Formats commit info and diffs into styled HTML (see the sketch below).
   - **Output**: HTML report of commit details.

5. Groq Chat Model Node
   - **Type**: @n8n/n8n-nodes-langchain.lmChatGroq
   - **Purpose**: Provides the AI model (llama-3.1-8b-instant).

6. Simple Memory Node
   - **Type**: @n8n/n8n-nodes-langchain.memoryBufferWindow
   - **Purpose**: Maintains memory context for the AI agent.

7. AI Agent Node
   - **Type**: @n8n/n8n-nodes-langchain.agent
   - **Purpose**: Executes the AI-based code review.
   - **Prompt**: Reviews for bugs, style, grammar, and security. Outputs styled HTML.

8. Output Parser Node
   - **Type**: n8n-nodes-base.code
   - **Purpose**: Combines the commit HTML with the AI review into one HTML block.

9. Gmail Node
   - **Type**: n8n-nodes-base.gmail
   - **Purpose**: Sends the review report via email.
   - **Recipient**: akhilgadiraju@gmail.com

10. End Workflow Node
    - **Type**: n8n-nodes-base.noOp
    - **Purpose**: Marks the end.

## Customization Tips
- **GitHub Trigger**: Change the repo/owner or the trigger events.
- **HTTP Request**: Modify the endpoint to get specific data.
- **AI Agent**: Update the prompt to focus on different review aspects.
- **Groq Model**: Swap for other supported LLMs if needed.
- **Memory**: Use a dynamic session key for per-commit reviews.
- **Email**: Change the recipient or email styling.

## Error Handling
Use Error Trigger nodes to handle failures in:
- GitHub API requests
- LLM generation
- Email delivery

## Use Cases
- Instant AI-powered feedback on code pushes.
- Pre-human review suggestions.
- Security and standards enforcement.
- Developer onboarding assistance.

## Required Credentials

| Credential | Used By | Notes |
|-----------|---------|-------|
| GitHub API (ID PSygiwMjdjFDImYb) | GitHub Trigger | PAT with repo and admin:repo_hook |
| GitHub OAuth2 API | HTTP Request | OAuth2 token with repo scope |
| Groq - Akhil (ID HJl5cdJzjhf727zW) | Groq Chat Model | API Key from GroqCloud |
| Gmail OAuth2 - Akhil (ID wqFUFuFpF5eRAp4E) | Gmail | Gmail OAuth2 for sending email |

## Final Note
Made with ❤️ using n8n by Akhil.
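For reference, here is a minimal sketch of what the HTML Formatter Code node might look like, assuming the HTTP Request node returned a GitHub "get commit" response (which includes a `files[]` array with `filename`, `status`, `additions`, `deletions`, and `patch`). The styling is illustrative.

```javascript
// Build a simple HTML report from a GitHub commit response.
const commit = $input.first().json;

const fileRows = (commit.files || []).map(f => `
  <h4>${f.filename} (${f.status}, +${f.additions}/-${f.deletions})</h4>
  <pre style="background:#f6f8fa;padding:8px;">${(f.patch || '')
    .replace(/&/g, '&amp;').replace(/</g, '&lt;')}</pre>
`).join('');

const html = `
  <h2>Commit ${commit.sha ? commit.sha.slice(0, 7) : ''}</h2>
  <p>${commit.commit?.message || ''}</p>
  ${fileRows}
`;

return [{ json: { html } }];
```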
by Samir Saci
## Context
Hey! I'm Samir, a Supply Chain Data Scientist from Paris who spent six years in China studying and working while struggling to learn Mandarin. I know the challenges of mastering a complex language like Chinese, and my greatest support was flash cards. Therefore, I designed this workflow to support fellow Mandarin learners by automating flashcard creation using n8n, so they can focus more on learning and less on manual data entry.

📬 For business inquiries, you can add me on Here

## Who is this template for?
This workflow template is designed for language learners and educators who want to automate the creation of flashcards for Mandarin (or any other language) using the Google Translate API, an AI agent for phonetic transcription and generating an illustrative sentence, and a free image retrieval API.

## Why?
If you use the open-source application Anki, this workflow will help you automatically generate personalized study materials.

## How?
Let us imagine you want to learn how to say the word Contract in Mandarin. The workflow will automatically:
- Translate the word into Simplified Mandarin (Mandarin: 合同)
- Provide the phonetic transcription (Pinyin: Hétóng)
- Generate an example sentence (Example: 我们签订了一份合同.)
- Download an illustrative picture, for example a picture of a contract signature (a sketch of the image lookup follows below)

All these fields are automatically recorded in a Google Sheet, making it easy to import into Anki and generate flashcards instantly.

## What do I need to start?
This workflow can be used with the free-tier plans of the services used. It does not require any advanced programming skills.

Prerequisites:
- A Google Drive account with a folder including a Google Sheet
- API credentials: Google Drive API, Google Sheets API and Google Translate API activated with OAuth2 credentials
- A free API key from pexels.com
- A Google Sheet with the required columns

## Next
Follow the sticky notes to set up the parameters inside each node and get ready to boost your learning. I have detailed the steps in a short tutorial 👇
🎥 Check My Tutorial

## Notes
This workflow can be used for any language. In the AI Agent prompt, you just need to replace the word pinyin with phonetic transcription.
You can adapt the trigger to operate the workflow the way you want: these operations can be performed in batch or triggered by Telegram, email, or webhook.
If you want to learn more about how I used Anki flash cards to learn Mandarin:
🈷️ Blog Article about Anki Flash Cards
This workflow has been created with n8n 1.82.1
Submitted: March 17th, 2025
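For the image retrieval step, here is a minimal standalone sketch of the Pexels lookup (in the workflow this is an HTTP Request node). The endpoint and Authorization header follow the public Pexels API; `PEXELS_API_KEY` is a placeholder for your own free key.

```javascript
// Search Pexels for one photo matching the word being learned.
const query = 'contract';
const res = await fetch(
  `https://api.pexels.com/v1/search?query=${encodeURIComponent(query)}&per_page=1`,
  { headers: { Authorization: process.env.PEXELS_API_KEY } }
);
const data = await res.json();

// First matching photo, medium size -- this URL goes into the Google Sheet row.
console.log(data.photos?.[0]?.src?.medium);
```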
by Oneclick AI Squad
## Overview
This workflow retrieves airline web check-in URLs from Google Sheets, scrapes their content, employs an LLM to generate structured JSON data, refreshes the sheet, creates embeddings, and saves them in a Postgres vector DB for future semantic searches or question-answering.

## Quick Notes
- Verify that Google Sheets has accurate URLs for scraping.
- Ensure the Postgres vector DB is set up correctly for embedding storage.

## Process Flow
1. Start the workflow with the Chat Trigger - Start node.
2. Retrieve airline check-in URLs using the Fetch Airline URLs node.
3. Scrape webpage data with the Scrape Airline Webpage node.
4. Extract JSON data using the Extract info with LLM node with a Chat Model.
5. Pause for a response with the Wait for Response node.
6. Update Google Sheets with the Store Extracted Data node.
7. Create embeddings with the Generate Embeddings node and store them in the Postgres vector DB with the Save to Vector DB node.
8. Break down long text with the Split Long Text node (see the chunking sketch below) and delay the next batch with the Wait Before Next Batch node.

## Getting Started
- Import the workflow into n8n and set up Google Sheets and Postgres vector DB credentials.
- Run a test with a sample URL to confirm scraping and embedding storage.

## Tailored Adjustments
Tweak the Extract info with LLM node to adjust the JSON output, or modify the Fetch Airline URLs node to pull from different sheet fields.
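Here is a minimal sketch of the text-splitting step as a Code node. The input field name (`pageContent`), chunk size, and overlap are illustrative assumptions; tune them to your embedding model's context window.

```javascript
// Chunk scraped page text into overlapping windows before embedding.
const text = $input.first().json.pageContent || '';
const chunkSize = 1000;
const overlap = 100;

const chunks = [];
for (let start = 0; start < text.length; start += chunkSize - overlap) {
  chunks.push({ json: { chunk: text.slice(start, start + chunkSize) } });
}

return chunks;
```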
by Charles
Modern AI systems are powerful but pose privacy risks when handling sensitive data. Organizations need AI capabilities while ensuring:
✅ Sensitive data never leaves secure environments
✅ Compliance with regulations (GDPR, HIPAA, PCI, SOX)
✅ Real-time decision making about data sensitivity
✅ Comprehensive audit trails for regulatory review

## The Concept: Intelligent Data Classification + Smart Routing
The goal of this concept is to build the foundations of safe and compliant use of LLMs in agentic workflows by automatically detecting sensitive data, applying sanitization rules, and intelligently routing requests through secure processing channels.

This workflow analyzes the user's chat or webhook input and attempts to detect PII using the Enhanced PII Pattern Detector. If PII is detected, the workflow processes the input via a series of compliance, auditing, and security steps which log and sanitize the request before any LLM is pinged.

## Why Multi-Tier Routing?
Traditional systems use binary decisions (sensitive/not sensitive). Our 3-tier approach provides:
✅ Granular Security: Critical PII gets maximum protection
✅ Performance Optimization: Clean data gets full cloud capabilities
✅ Cost Efficiency: Expensive local processing only when needed
✅ User Experience: Maintains conversational flow across security levels

## Why Context-Aware Detection?
Regex patterns alone miss contextual sensitivity. Our approach:
✅ Catches Intent: A "bank account" discussion is sensitive even without account numbers
✅ Reduces False Negatives: Medical discussions stay secure even without explicit medical IDs
✅ Proactive Protection: Identifies sensitive contexts before PII is shared
✅ Compliance Alignment: Matches how regulations actually define sensitive data

## Why Risk Scoring vs Binary Classification?
Binary PII detection creates artificial boundaries. Risk scoring provides (see the scoring sketch after this section):
✅ Nuanced Decisions: Multiple low-risk patterns might aggregate to high risk
✅ Adaptive Thresholds: Organizations can adjust sensitivity based on their needs
✅ Better UX: Users aren't unnecessarily restricted for low-risk scenarios
✅ Audit Transparency: Clear reasoning for every routing decision

## Why Comprehensive Monitoring?
Privacy systems require trust and verification:
✅ Compliance Proof: Audit trails demonstrate regulatory compliance
✅ Performance Optimization: Identify bottlenecks and improve efficiency
✅ Security Validation: Ensure no sensitive data leakage occurs
✅ Operational Insights: Understand usage patterns and system health

## How to Install
All you will need for this workflow are credentials for your LLM providers, such as Ollama, OpenRouter, OpenAI, Anthropic, etc. This workflow is customizable and lets you define the best LLM and storage/memory solutions for your specific use case.
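To make the risk-scoring idea concrete, here is a minimal sketch of weighted scoring with 3-tier routing in a Code node. The patterns, weights, thresholds, and input field name (`chatInput`) are illustrative assumptions — tune them to your compliance requirements, not a description of the template's exact detector.

```javascript
// Score the input: several low-risk matches can aggregate to a high risk.
const text = $input.first().json.chatInput || '';

const patterns = [
  { name: 'ssn',         re: /\b\d{3}-\d{2}-\d{4}\b/g,                  weight: 0.9 },
  { name: 'credit_card', re: /\b(?:\d[ -]?){13,16}\b/g,                 weight: 0.8 },
  { name: 'email',       re: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g,            weight: 0.3 },
  { name: 'context',     re: /\b(bank account|diagnosis|password)\b/gi, weight: 0.4 },
];

let score = 0;
const matched = [];
for (const p of patterns) {
  const hits = (text.match(p.re) || []).length;
  if (hits > 0) {
    score += p.weight * Math.min(hits, 3); // cap repeated matches
    matched.push(p.name);
  }
}

// Route: tier 1 = local-only LLM, tier 2 = sanitized cloud, tier 3 = full cloud.
const tier = score >= 0.9 ? 1 : score >= 0.3 ? 2 : 3;

return [{ json: { score, tier, matched, text } }];
```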
by Yulia
Free template for voice & text messages with short-term memory

This n8n workflow template is a blueprint for an AI Telegram bot that processes both voice and text messages. Ready to use with minimal setup. The bot remembers the last several messages (10 by default), understands commands, and provides responses in HTML. You can easily swap GPT-4 and Whisper for other language and speech-to-text models to suit your needs.

## Core Features
- Text: send or forward messages
- Voice: transcription via Whisper
- Extend this template by adding LangChain tools

## Requirements
- Telegram Bot API
- OpenAI API (for GPT-4 and Whisper)

💡 New to Telegram bots? Check our step-by-step guide on creating your first bot and setting up OpenAI access.

## Use Cases
- Personal AI assistant
- Customer support automation
- Knowledge base interface
- Integration hub for services that you use: connect to any API via HTTP Request Tool, or trigger other n8n workflows with Workflow Tool
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically analyzes purchase trends and consumer behavior patterns to identify market opportunities and optimize business strategies. It saves you time by eliminating the need to manually analyze sales data and provides insights into buying patterns, seasonal trends, and customer preferences.

## Overview
This workflow automatically scrapes e-commerce platforms, marketplace data, and sales analytics to extract purchase trends, product popularity, and consumer behavior insights. It uses Bright Data to access sales data and AI to intelligently analyze purchasing patterns, seasonal trends, and market opportunities.

## Tools Used
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping e-commerce and marketplace platforms without being blocked
- **OpenAI**: AI agent for intelligent purchase trend analysis and forecasting
- **Google Sheets**: For storing purchase trend data and analysis results

## How to Install
1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set Up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your trend analysis spreadsheet
5. Customize: Define target marketplaces and trend analysis parameters

## Use Cases
- **E-commerce Strategy**: Identify trending products and market opportunities
- **Product Development**: Understand consumer preferences and demand patterns
- **Marketing Planning**: Optimize campaigns based on seasonal purchase trends
- **Business Intelligence**: Make data-driven decisions using market trend insights

## Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #purchasetrends #marketanalysis #brightdata #webscraping #ecommerce #n8nworkflow #workflow #nocode #trendanalysis #consumerinsights #marketresearch #salesanalytics #businessintelligence #markettrends #customerinsights #ecommerceanalysis #salesdata #marketforecasting #consumerdata #purchaseanalysis #retailanalytics #marketinsights #demandforecasting #salestrends #consumertrends #marketintelligence #buyingpatterns #marketdemand
by n8n Team
This workflow connects Telegram bots with LangChain nodes in n8n.

The main AI Agent node is configured as a Conversation Agent. It has a custom System Prompt which explains the reply formatting and provides some additional instructions. The AI Agent has several connections:
- The OpenAI GPT-4 model is called to generate the replies.
- Window Buffer Memory stores the history of the conversation with each user separately.
- There is an additional Custom n8n Workflow tool (Dall-E 3 Tool). The AI Agent uses this tool when the user requests an image generation.

In the lower part of the workflow, there is a series of nodes that call the Dall-E 3 model with the user's Telegram ID and a prompt for a new image. Once the image is ready, it is sent back to the user.

Finally, there is an extra Telegram node that masks HTML syntax (sketched below) for improved stability in case the AI Agent replies using an unsupported format.
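Here is a minimal sketch of that masking step as a Code node: it escapes any tags outside a small allow-list so a malformed reply cannot break Telegram's HTML parse mode. The allow-list is a subset of the tags Telegram's Bot API documents, and the input field name (`output`) is an assumption about the agent's output key.

```javascript
// Escape HTML tags Telegram does not support before sending the message.
const allowed = ['b', 'i', 'u', 's', 'a', 'code', 'pre'];
const reply = $input.first().json.output || '';

const safe = reply.replace(/<\/?([a-zA-Z0-9-]+)([^>]*)>/g, (match, tag) =>
  allowed.includes(tag.toLowerCase())
    ? match
    : match.replace(/</g, '&lt;').replace(/>/g, '&gt;')
);

return [{ json: { text: safe } }];
```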
by Michael Gullo
Automate Drafts From Google Drive

This workflow automates the end-to-end process of extracting and summarizing information from PDFs stored in a specific Google Drive folder. When a new PDF (or any binary data) is added, the workflow is triggered and begins by downloading and processing the PDF to extract all available text. If multiple PDFs are detected, their content is aggregated into a single, combined dataset. This automation eliminates the time-consuming task of manually reading, taking notes, and drafting documents. By removing this burden, users can focus on more meaningful tasks while the workflow handles the repetitive, tedious work.

The extracted content is then passed through an AI-powered information extractor that identifies key details such as names, dates, addresses, and any other structured data points the user wants to extract from the PDF. This step is highly customizable, allowing the user to define exactly what type of information should be extracted. While the workflow is designed to extract all available content from the PDF, specifying additional structured data points ensures that critical details are accurately captured.

A second OpenAI node uses the extracted information to draft a professional, formal summary suitable for documentation. This is the most important part of the workflow and can be fully customized to meet the user's specific needs. By editing the prompts, users can tailor the workflow to generate a wide variety of draft formats based on the extracted content.

The workflow then generates a new Google Document containing the full draft and composes an email summarizing the key points in 3 to 5 bullet points. This email is automatically sent to the designated recipient along with a direct link to the Google Doc. This solution is ideal for insurance, legal, or administrative use cases where timely, accurate extraction and reporting from incoming PDFs is essential.

## How To Use The Workflow
Step 1 - Place any binary data (e.g., PDF files) into the designated Google Drive folder.
Step 2 - The workflow will automatically download each PDF, extract the text, and, if multiple PDFs are present, combine them into a single dataset for analysis (see the aggregation sketch below).
Step 3 - The OpenAI Draft Agent will analyze the extracted information, generate a formal draft, and create a Google Document. This document will be updated with the draft content and saved back into the same Google Drive folder.
Step 4 - An email will be sent to the designated recipient(s), including a summary of the draft and key extracted information, along with a link to view the Google Document.

## Need Help? Have Questions?
For consulting and support, or if you have questions, please feel free to connect with me on LinkedIn or email michael.gullo@outlook.com.
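Here is a minimal sketch of the aggregation step (Step 2) as a Code node, assuming each incoming item carries the extracted text in a `text` field (the Extract From File node's usual output); the separator format is an illustrative assumption.

```javascript
// Merge the text extracted from several PDFs into one combined dataset
// for the downstream information extractor.
const combined = $input.all()
  .map((item, i) => `--- Document ${i + 1} ---\n${item.json.text || ''}`)
  .join('\n\n');

return [{ json: { combinedText: combined } }];
```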
by Oneclick AI Squad
This AI-powered workflow reads emails, understands the request using an LLM, and creates structured Jira issues.

## Key Insights
- Polls for new emails every 5 minutes; ensure Gmail/IMAP is properly configured.
- AI analysis requires a reliable LLM model (e.g., Chat Model or AI Tool).

## Workflow Process
1. Trigger the workflow with the Check for New Emails Gmail Trigger node.
2. Fetch the full email content using the Fetch Full Email Content get message node.
3. Analyze the email content with the Analyze Email & Extract Tasks node using AI.
4. Parse the AI-generated JSON output into tasks with the Parse JSON Output from AI node (see the parsing sketch below).
5. Create the main Jira issue with the Jira - Create Main Issue create: issue node.
6. Split subtasks from the JSON and create them with the Split Subtasks JSON Items and Create Subtasks create: issue nodes.

## Usage Guide
- Import the workflow into n8n and configure Gmail and Jira credentials.
- Test with a sample email to ensure ticket creation and subtask assignment.

## Prerequisites
- Gmail/IMAP credentials for email polling
- Jira API credentials with issue creation permissions

## Customization Options
Adjust the Analyze Email & Extract Tasks node to refine AI task extraction, or modify the polling frequency in the trigger node.
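Here is a minimal sketch of the JSON-parsing step as a Code node. The expected shape ({ summary, description, subtasks[] }) and the input field name (`text`) are illustrative assumptions — match them to your actual AI prompt and model output.

```javascript
// Pull the JSON object out of the LLM's text reply and split it into
// a main-issue payload plus subtask items.
const raw = $input.first().json.text || '';

// LLMs often wrap JSON in prose or a markdown fence; grab the object itself.
const match = raw.match(/\{[\s\S]*\}/);
if (!match) {
  throw new Error('No JSON object found in the AI output');
}
const parsed = JSON.parse(match[0]);

return [{
  json: {
    summary: parsed.summary,
    description: parsed.description,
    subtasks: parsed.subtasks || [],
  },
}];
```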
by Angel Menendez
## Who's it for
This workflow is ideal for AI developers running multi-agent systems in n8n who need to quantitatively evaluate tool usage behavior. If you're building autonomous agents and want to verify their decisions against ground-truth expectations, this workflow gives you plug-and-play observability.

## What it does
This template uses n8n's built-in Evaluation Trigger and Evaluation nodes to assess whether an AI agent correctly used all the expected tools. It supports:
- Dataset-driven testing of agent behavior
- Logging actual tools to compare them with the expected tools
- Assigning performance metrics (tool_called = true/false)
- Persisting output back to Google Sheets for further debugging

The workflow can be triggered by either the chat input or the dataset row evaluation. It routes through a multi-tool agent node powered by the best LLMs. The agent has access to tools such as web search, calculator, vector search, and summarizer tools.

The workflow then validates tool-use decisions by extracting the intermediate steps from the agent (i.e., action + observation) and comparing the tools that were called with the expected tools. If the tools called during the workflow execution match, it's a pass; otherwise, it's documented as a fail. The evaluation nodes take care of that process; a comparison sketch follows below.

## How to set it up
1. Connect your Google Sheets OAuth2 credential.
2. Replace the document with your own test dataset.
3. Set your desired models and configure the different agent tools, such as the summarizer and vector store. The default vector store used is Qdrant, so you must create this vector store with a few samples of queries + web search results.
4. Run from either the chat trigger or the evaluation trigger to test.

## Requirements
- Google Sheets OAuth2 credential
- OpenRouter / OpenAI credentials for AI agents and embeddings
- Firecrawl and Qdrant credentials for web + vector search

## How to customize
- Edit the Search Agent system message to define tool selection behavior
- Add more metric columns in the Evaluation node for complex scoring
- Add new tool nodes and link them to the agent block
- Swap in your own summarizer
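As a reference for the comparison logic, here is a minimal Code-node sketch. It assumes the agent runs with "Return Intermediate Steps" enabled (so `intermediateSteps` with `action.tool` is available); the trigger node name and the `expected_tools` column format are illustrative assumptions about the dataset.

```javascript
// Compare the tools the agent actually called with the expected tools
// from the evaluation dataset row.
const steps = $input.first().json.intermediateSteps || [];
const calledTools = [...new Set(steps.map(s => s.action?.tool).filter(Boolean))];

// e.g. expected_tools = "web_search,calculator" in the Google Sheets row.
const expected = ($('Evaluation Trigger').first().json.expected_tools || '')
  .split(',')
  .map(t => t.trim())
  .filter(Boolean);

const toolCalled = expected.every(t => calledTools.includes(t));

return [{ json: { calledTools, expected, tool_called: toolCalled } }];
```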