by Amine ARAGRAG
This n8n template automates the collection and enrichment of Product Hunt posts using AI and Google Sheets. It fetches new tools daily, translates content, categorizes them intelligently, and saves everything into a structured spreadsheet—ideal for building directories, research dashboards, newsletters, or competitive intelligence assets.

**Good to know**
- Sticky notes inside the workflow explain each functional block and required configurations.
- Uses cursor-based pagination to safely fetch Product Hunt data (see the sketch at the end of this entry).
- The AI agent handles translation, documentation generation, tech extraction, and function area classification.
- Category translations are synced with a Google Sheets dictionary to avoid duplicates.
- All enriched entries are stored in a clean "Tools" sheet for easy filtering or reporting.

**How it works**
1. A schedule trigger starts the workflow daily.
2. Product Hunt posts are retrieved via GraphQL and processed in batches.
3. A code node restructures each product into a consistent schema.
4. The workflow checks whether a product already exists in Google Sheets.
5. For new items, the AI agent generates metadata, translations, and documentation.
6. Categories are matched or added to a Google Sheets dictionary.
7. The final enriched product entry is appended or updated in the spreadsheet.
8. Pagination continues until no next page remains.

**How to use**
- Connect Product Hunt OAuth2, Google Sheets, and OpenAI credentials.
- Adjust the schedule trigger to your preferred frequency.
- Optionally expand enrichment fields (tags, scoring, custom classifications).
- Replace the trigger with a webhook or manual trigger if needed.

**Requirements**
- Product Hunt OAuth2 credentials
- Google Sheets account
- OpenAI (or compatible) API access

**Customising this workflow**
- Add Slack or Discord notifications for new tools.
- Push enriched data to Airtable, Notion, or a database.
- Extend AI enrichment with summaries or SEO fields.
- Use the Google Sheet as a backend for dashboards or frontend applications.
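As a supplement to the pagination notes above, here is a minimal sketch of cursor-based fetching against Product Hunt's GraphQL API. The endpoint and field names follow Product Hunt's public v2 schema, but the query shape and the `PH_TOKEN` variable are illustrative assumptions, not the workflow's exact code:

```javascript
// Hedged sketch: fetch Product Hunt posts page by page using an `after`
// cursor, stopping when hasNextPage is false (as the workflow description
// says: "pagination continues until no next page remains").
const query = `
  query($after: String) {
    posts(order: NEWEST, first: 20, after: $after) {
      pageInfo { hasNextPage endCursor }
      nodes { id name tagline url topics(first: 5) { nodes { name } } }
    }
  }`;

let cursor = null;
const allPosts = [];
do {
  const res = await fetch('https://api.producthunt.com/v2/api/graphql', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.PH_TOKEN}`, // your OAuth2 token
    },
    body: JSON.stringify({ query, variables: { after: cursor } }),
  });
  const { data } = await res.json();
  allPosts.push(...data.posts.nodes);
  cursor = data.posts.pageInfo.hasNextPage ? data.posts.pageInfo.endCursor : null;
} while (cursor); // stop when no next page remains
```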
by Nguyen Thieu Toan
🤖 Facebook Messenger Smart Chatbot – Batch, Format & Notify with n8n Data Table by Nguyen Thieu Toan

**🌟 What Is This Workflow?**
This is a smart chatbot solution built with n8n, designed to integrate seamlessly with Facebook Messenger. It batches incoming messages, formats them for clarity, tracks conversation history, and sends natural replies using AI. Perfect for businesses, customer support, or personal AI agents.

**⚙️ Key Features**
- 🔄 Smart batching: Groups consecutive user messages to process them in one go, avoiding fragmented replies.
- 🧠 Context formatting: Automatically formats messages to fit Messenger's structure and length limits.
- 📋 Conversation history tracking: Stores and retrieves chat logs between user and bot using an n8n Data Table.
- 👀 Seen & Typing effects: Adds human-like responsiveness with Messenger's sender actions.
- 🧩 AI Agent integration: Easily connects to GPT, Gemini, or any LLM for natural replies, scheduling, or business logic.

**🚀 How It Works**
1. Connects to your Facebook Page via webhook to receive and send messages.
2. Stores incoming messages in a Data Table called Batch_messages, including fields like user_text, bot_rep, processed, etc.
3. Collects unprocessed messages, sorts them by id, and creates a merged_message and full history (see the sketch at the end of this entry).
4. Sends the history to an AI Agent for contextual response generation.
5. Sends the AI reply back to Messenger with Seen/Typing effects.
6. Updates the message status to processed = true to prevent duplicate handling.

**🛠️ Setup Guide**
1. Create a Facebook App and Messenger webhook, and link it to your Page.
2. Set up the Batch_messages Data Table in n8n with the required columns.
3. Import the workflow or build the nodes manually using the tutorial.
4. Configure your API tokens, webhook URLs, and AI Agent endpoint.
5. Deploy the workflow on a public n8n server.

📘 Full tutorial available at: 👉 Smart Chatbot Workflow Guide by Nguyen Thieu Toan

**💡 Pro Tips**
- Customize the AI prompt and persona to match your business tone.
- Add scheduling, lead capture, or CRM integration using n8n's flexible nodes.
- Monitor your Data Table regularly to ensure clean message flow and batching.

**👤 About the Creator**
Nguyen Thieu Toan (Nguyễn Thiệu Toàn/Jay Nguyen) is an expert in AI automation, business optimization, and chatbot development. With a background in marketing and deep knowledge of n8n workflows, Jay helps businesses harness AI to save time, boost performance, and deliver smarter customer experiences. Website: https://nguyenthieutoan.com
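To close this entry, here is a minimal Code-node sketch of the batching step described above: collect unprocessed rows, sort by id, and build the merged message plus a readable history. The field names (user_text, bot_rep, processed) come from the description; anything else is an illustrative assumption:

```javascript
// Hedged sketch of the batching step for the Batch_messages Data Table.
const rows = items.map(i => i.json);

// Only rows not yet handled, in arrival order
const unprocessed = rows
  .filter(r => r.processed !== true)
  .sort((a, b) => a.id - b.id);

// One consolidated prompt instead of several fragmented replies
const merged_message = unprocessed.map(r => r.user_text).join('\n');

// Flat transcript the AI Agent can use as conversation context
const history = rows
  .sort((a, b) => a.id - b.id)
  .map(r => `User: ${r.user_text}` + (r.bot_rep ? `\nBot: ${r.bot_rep}` : ''))
  .join('\n');

return [{ json: { merged_message, history } }];
```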
by Vitorio Magalhães
**🎯 What this workflow does**
This workflow automatically monitors Reddit subreddits for new image posts and downloads them to Google Drive. It's perfect for content creators, meme collectors, or anyone who wants to automatically archive images from their favorite subreddits without manual work. The workflow intelligently prevents duplicate downloads by checking existing files in Google Drive and sends you Telegram notifications about the download status, so you always know when new content has been saved.

**🚀 Key Features**
- **Multi-subreddit monitoring**: Configure multiple subreddits to monitor simultaneously
- **Smart duplicate detection**: Never downloads the same image twice
- **Automated scheduling**: Runs on a customizable cron schedule
- **Real-time notifications**: Get instant Telegram updates about download activity
- **Rate limit friendly**: Built-in delays to respect Reddit's API limits
- **Cloud storage integration**: Direct upload to organized Google Drive folders

**📋 Prerequisites**
Before using this workflow, you'll need:
- **Reddit Developer Account**: Create an app at reddit.com/prefs/apps
- **Google Cloud Project**: With Drive API enabled and OAuth2 credentials
- **Telegram Bot**: Created via @BotFather with your chat ID
- **Basic n8n knowledge**: Understanding of credentials and node configuration

**⚙️ Setup Instructions**
1. Configure Reddit API Access
   - Visit reddit.com/prefs/apps and create a new "script" type application
   - Note your Client ID and Client Secret
   - Add Reddit OAuth2 credentials in n8n
2. Set up Google Drive Integration
   - Enable Google Drive API in Google Cloud Console
   - Create OAuth2 credentials with appropriate scopes
   - Configure Google Drive OAuth2 credentials in n8n
   - Update the folder ID in the workflow to your desired destination
3. Configure Telegram Notifications
   - Create a bot via @BotFather on Telegram
   - Get your chat ID (message @userinfobot)
   - Add Telegram API credentials in n8n
4. Customize Your Settings
   Update the Settings node with:
   - Your Telegram chat ID
   - List of subreddits to monitor (e.g., ['memes', 'funny', 'pics'])
   - Optional: Adjust wait time between requests
   - Optional: Modify the cron schedule

**🔄 How it works**
1. **Scheduled Trigger**: The workflow starts automatically based on your cron configuration
2. **Random Selection**: Picks a random subreddit from your configured list
3. **Fetch Posts**: Retrieves the latest 30 posts from the subreddit's "new" section
4. **Image Filtering**: Keeps only posts with i.redd.it image URLs (see the sketch at the end of this entry)
5. **Duplicate Check**: Searches Google Drive to avoid re-downloading existing images
6. **Download & Upload**: Downloads new images and uploads them to your Drive folder
7. **Notification**: Sends a Telegram message with the download summary

**🛠️ Customization Options**
- Scheduling: Modify the cron trigger to run hourly, daily, or at custom intervals; add timezone considerations for your location
- Content Filtering: Add upvote threshold filters to get only popular content; filter by image dimensions or file size; implement NSFW content filtering
- Storage & Organization: Create subfolders by subreddit; add date-based folder organization; implement file naming conventions
- Notifications & Monitoring: Add Discord webhook notifications; create download statistics tracking; log failed downloads for debugging

**📊 Use Cases**
- **Content Creators**: Automatically collect memes and trending images for social media
- **Digital Marketers**: Monitor visual trends across different communities
- **Researchers**: Archive visual content from specific subreddits for analysis
- **Personal Use**: Build a curated collection of images from your favorite subreddits

**🎯 Best Practices**
- **Respect Rate Limits**: Keep the wait time between requests to avoid being blocked
- **Monitor Storage**: Regularly check Google Drive storage usage
- **Subreddit Selection**: Choose active subreddits with regular image posts
- **Credential Security**: Use n8n's credential system and never hardcode API keys

**🚨 Important Notes**
- This workflow only downloads images from i.redd.it (Reddit's image host)
- Some subreddits may have bot restrictions
- Reddit's API has rate limits (~60 requests per minute)
- Ensure your Google Drive has sufficient storage space
- Always comply with Reddit's Terms of Service and content policies
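Here is a minimal sketch of the image-filtering step referenced above: given the JSON returned by Reddit's `/r/<subreddit>/new.json` listing, keep only posts whose URL points at i.redd.it. The listing shape (`data.children[].data`) matches Reddit's public API; the derived `filename` field is an illustrative assumption:

```javascript
// Hedged sketch: filter a Reddit "new" listing down to i.redd.it image posts.
const listing = items[0].json;

const imagePosts = listing.data.children
  .map(child => child.data)
  .filter(post => (post.url || '').startsWith('https://i.redd.it/'))
  .map(post => ({
    json: {
      id: post.id,
      title: post.title,
      url: post.url,
      // filename later used for the Google Drive duplicate check
      filename: post.url.split('/').pop(),
    },
  }));

return imagePosts;
```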
by Khair Ahammed
Meet Troy, your intelligent personal assistant that seamlessly manages your Google Calendar and Tasks through Telegram. This workflow combines AI-powered natural language processing with MCP (Model Context Protocol) integration to provide a conversational interface for scheduling meetings, managing tasks, and organizing your digital life.

**Key Features**

📅 Smart Calendar Management
- Create single and recurring events with conflict detection
- Support for multiple attendees (1-2 attendee variants)
- Automatic time zone handling (Bangladesh Standard Time)
- Weekly recurring event scheduling
- Event retrieval, updates, and deletion

✅ Task Management
- Create, update, and delete tasks in Google Tasks
- Mark tasks as completed
- Retrieve task lists with completion status
- Task repositioning and organization
- Parent-child task relationships

🤖 Intelligent Processing
- Natural language understanding for scheduling requests
- Automatic conflict detection before event creation (see the sketch at the end of this entry)
- Context-aware responses with conversation memory
- Error handling with fallback messages

📱 Telegram Interface
- Real-time chat interaction
- Simple commands and natural language
- Instant confirmations and updates
- Error notifications

**Workflow Components**

Core Architecture:
- Telegram Trigger for user messages
- AI Agent with GPT-4o-mini processing
- MCP Client Tools for Google services
- Conversation memory for context
- Error handling with backup responses

MCP Integrations:
- Google Calendar MCP Server (6 specialized tools)
- Google Tasks MCP Server (5 task operations)
- Custom HTTP tool for advanced task positioning

**Use Cases**

Calendar Scenarios:
- "Schedule a meeting tomorrow at 3 PM with john@example.com"
- "Set up weekly team standup every Monday at 10 AM"
- "Check my calendar for conflicts this afternoon"
- "Delete the meeting with ID xyz123"

Task Management:
- "Add a task to buy groceries"
- "Mark the project report task as completed"
- "Update my presentation task due date to Friday"
- "Show me all pending tasks"

**Setup Requirements**

Required Credentials:
- Google Calendar OAuth2
- Google Tasks OAuth2
- OpenAI API key
- Telegram Bot token

**MCP Configuration:**
- Two MCP server endpoints for Google services
- Proper webhook configurations
- SSL-enabled n8n instance for MCP triggers

**Business Benefits**
- **Productivity**: Voice-to-action task and calendar management
- **Efficiency**: Eliminate app switching with the chat interface
- **Intelligence**: AI prevents scheduling conflicts automatically
- **Accessibility**: Simple Telegram commands for complex operations

**Technical Specifications**

Components:
- 1 Telegram trigger
- 1 AI Agent with memory
- 2 MCP triggers (Calendar & Tasks)
- 13 Google service tools
- Error handling flows

- **Response Time**: Sub-second for most operations
- **Memory**: Session-based conversation context
- **Timezone**: Automatic Bangladesh Standard Time conversion

This personal assistant transforms how you interact with Google services, making scheduling and task management as simple as sending a text message to Troy on Telegram.

Tags: personal-assistant, mcp-integration, google-calendar, google-tasks, telegram-bot, ai-agent, productivity
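Closing this entry, here is a minimal sketch of the conflict check described under Intelligent Processing, assuming events have already been fetched from Google Calendar. The `start.dateTime`/`end.dateTime` shape follows the Calendar API; the proposed-slot field names are illustrative assumptions, not the workflow's exact code:

```javascript
// Hedged sketch: flag overlaps between a proposed slot and existing events.
const proposedStart = new Date($json.proposedStart); // e.g. "2025-06-02T15:00:00+06:00"
const proposedEnd = new Date($json.proposedEnd);

const conflicts = ($json.events || []).filter(ev => {
  const start = new Date(ev.start.dateTime || ev.start.date);
  const end = new Date(ev.end.dateTime || ev.end.date);
  // Two intervals overlap when each one starts before the other ends
  return proposedStart < end && start < proposedEnd;
});

return [{ json: { hasConflict: conflicts.length > 0, conflicts } }];
```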
by vinci-king-01
Software Vulnerability Patent Tracker

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically tracks newly-published patent filings that mention software-security vulnerabilities, buffer-overflow mitigation techniques, and related technology keywords. Every week it aggregates fresh patent data from USPTO and international patent databases, filters it by relevance, and delivers a concise JSON digest (and optional Intercom notification) to R&D teams and patent attorneys.

**Pre-conditions/Requirements**

Prerequisites
- n8n instance (self-hosted or n8n cloud, v1.7.0+)
- ScrapeGraphAI community node installed
- Basic understanding of patent search syntax (for customizing keyword sets)
- Optional: Intercom account for in-app alerts

Required Credentials

| Credential | Purpose |
|------------|---------|
| ScrapeGraphAI API Key | Enables ScrapeGraphAI nodes to fetch and parse patent-office webpages |
| Intercom Access Token (optional) | Sends weekly digests directly to an Intercom workspace |

Additional Setup Requirements

| Setting | Recommended Value | Notes |
|---------|-------------------|-------|
| Cron schedule | 0 9 * * 1 | Triggers every Monday at 09:00 server time |
| Patent keyword matrix | See example CSV below | List of comma-separated keywords per tech focus |

Example keyword matrix (upload as keywords.csv or paste into the "Matrix" node):

```
topic,keywords
Buffer Overflow,"buffer overflow, stack smashing, stack buffer"
Memory Safety,"memory safety, safe memory allocation, pointer sanitization"
Code Injection,"SQL injection, command injection, injection prevention"
```

**How it works**

Key Steps:
1. **Schedule Trigger**: Fires weekly based on the configured cron expression.
2. **Matrix (Keyword Loader)**: Loads the CSV-based technology keyword matrix into memory.
3. **Code (Build Search Queries)**: Dynamically assembles patent-search URLs for each keyword group.
4. **ScrapeGraphAI (Fetch Results)**: Scrapes USPTO, EPO, and WIPO result pages and parses titles, abstracts, publication numbers, and dates.
5. **If (Relevance Filter)**: Removes patents older than 1 year or without vulnerability-related terms in the abstract.
6. **Set (Normalize JSON)**: Formats the remaining records into a uniform JSON schema.
7. **Intercom (Notify Team)**: Sends a summarized digest to your chosen Intercom workspace. (Skip or disable this node if you prefer to consume the raw JSON output instead.)
8. **Sticky Notes**: Contain inline documentation and customization tips for future editors.

**Set up steps** (Setup Time: 10-15 minutes)
1. Install Community Node: Navigate to "Settings → Community Nodes", search for ScrapeGraphAI, and click "Install".
2. Create Credentials: Go to "Credentials" → "New Credential" → select ScrapeGraphAI API → paste your API key. (Optional) Add an Intercom credential with a valid access token.
3. Import the Workflow: Click "Import" → "Workflow JSON" and paste the template JSON, or drag-and-drop the .json file.
4. Configure Schedule: Open the Schedule Trigger node and adjust the cron expression if a different frequency is required.
5. Upload / Edit Keyword Matrix: Open the Matrix node, paste your custom CSV, or modify the existing topics & keywords.
6. Review Search Logic: In the Code (Build Search Queries) node, review the base URLs and adjust patent databases as needed.
7. Define Notification Channel: If using Intercom, select your Intercom credential in the Intercom node and choose the target channel.
8. Execute & Activate: Click "Execute Workflow" for a trial run and verify the output. If satisfied, switch the workflow to "Active".

**Node Descriptions**

Core Workflow Nodes:
- **Schedule Trigger** – Initiates the workflow on a weekly cron schedule.
- **Matrix** – Holds the CSV keyword table and makes each row available as an item.
- **Code (Build Search Queries)** – Generates search URLs and attaches metadata for later nodes.
- **ScrapeGraphAI** – Scrapes patent listings and extracts structured fields (title, abstract, pub. date, link).
- **If (Relevance Filter)** – Applies date and keyword relevance filters (sketched below).
- **Set (Normalize JSON)** – Maps scraped fields into a clean JSON schema for downstream use.
- **Intercom** – Sends formatted patent summaries to an Intercom inbox or channel.
- **Sticky Notes** – Provide inline documentation and edit history markers.

Data Flow: Schedule Trigger → Matrix → Code → ScrapeGraphAI → If → Set → Intercom

**Customization Examples**

Change Data Source to Google Patents:

```javascript
// In the Code node
const base = 'https://patents.google.com/?q=';
items.forEach(item => {
  item.json.searchUrl = `${base}${encodeURIComponent(item.json.keywords)}&oq=${encodeURIComponent(item.json.keywords)}`;
});
return items;
```

Send Digest via Slack Instead of Intercom:

```javascript
// Replace the Intercom node with a Slack node; its message payload becomes:
{
  "text": `🚀 New Vulnerability-related Patents (${items.length})\n` +
    items.map(i => `• <${i.json.link}|${i.json.title}>`).join('\n')
}
```

**Data Output Format**

The workflow outputs structured JSON data:

```json
{
  "topic": "Memory Safety",
  "keywords": "memory safety, safe memory allocation, pointer sanitization",
  "title": "Memory protection for compiled binary code",
  "publicationNumber": "US20240123456A1",
  "publicationDate": "2024-03-21",
  "abstract": "Techniques for enforcing memory safety in compiled software...",
  "link": "https://patents.google.com/patent/US20240123456A1/en",
  "source": "USPTO"
}
```

**Troubleshooting**

Common Issues:
- Empty Result Set – Ensure that the keywords are specific but not overly narrow; test queries manually on USPTO.
- ScrapeGraphAI Timeouts – Increase the timeout parameter in the ScrapeGraphAI node or reduce concurrent requests.

Performance Tips:
- Limit the keyword matrix to <50 rows to keep weekly runs under 2 minutes.
- Schedule the workflow during off-peak hours to reduce load on patent-office servers.

Pro Tips:
- Combine this workflow with a vector database (e.g., Pinecone) to create a semantic patent knowledge base.
- Add a "Merge" node to correlate new patents with existing vulnerability CVE entries.
- Use a second ScrapeGraphAI node to crawl citation trees and identify emerging technology clusters.
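For reference, here is a minimal sketch of the relevance filter described above, implemented as a Code node instead of an If node: keep only patents published within the last year whose abstract mentions a vulnerability-related term. The field names match the JSON output format shown earlier; the term list is an illustrative assumption:

```javascript
// Hedged sketch of the date + keyword relevance filter.
const TERMS = ['vulnerability', 'buffer overflow', 'memory safety', 'injection'];
const oneYearAgo = new Date();
oneYearAgo.setFullYear(oneYearAgo.getFullYear() - 1);

return items.filter(item => {
  const { publicationDate, abstract = '' } = item.json;
  const recent = new Date(publicationDate) >= oneYearAgo;
  const relevant = TERMS.some(t => abstract.toLowerCase().includes(t));
  return recent && relevant;
});
```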
by Khairul Muhtadin
This AI-powered workflow transforms n8n workflow JSON files into publication-ready, SEO-optimized markdown posts for the n8n community. Simply upload your workflow's JSON, and let Google Gemini 2.5 Pro, guided by a LlamaIndex-powered knowledge base of best practices, automatically generate compelling content.

**Why Use This Workflow?**
- Time Savings: Reduces the time to create a detailed workflow post from over an hour of manual writing to under 2 minutes.
- Cost Reduction: Eliminates the need for separate AI content subscriptions or outsourcing content creation tasks.
- Error Prevention: Enforces content quality and structural consistency by using a knowledge base of n8n's official guidelines, minimizing formatting errors.

**Ideal For**
- **n8n Workflow Creators**: To quickly document and share their creations on the community platform without the tedious, time-consuming writing process.
- **Developer Advocates**: To standardize and accelerate the production of technical tutorials and workflow showcases.
- **Content & Marketing Teams**: To streamline the content pipeline for n8n-related blog posts, tutorials, and community engagement initiatives.

**How It Works**
1. Trigger: The process starts when you upload an n8n workflow JSON file via a simple web form.
2. Data Extraction: The workflow automatically extracts the JSON content from the uploaded file (a sketch of this step appears at the end of this entry).
3. Intelligence Layer: An advanced AI agent, powered by Google Gemini 2.5 Pro, analyzes the structure, nodes, and metadata of your workflow.
4. Knowledge Retrieval: The agent consults a specialized, in-memory knowledge base built from n8n's content guidelines. This knowledge base is created by parsing documents with LlamaIndex and refined with a Cohere Reranker for maximum accuracy.
5. Content Generation: The AI agent synthesizes the technical details from your JSON with the best practices from the knowledge base to write a complete, benefit-driven markdown post.
6. Output & Delivery: The final, polished markdown content is generated as the workflow's output, ready to be copied and pasted into the n8n community platform.

**Setup Guide**

Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Google Gemini API Key | Essential | Powers the core AI content generation |
| LlamaIndex Cloud API Key | Essential | Parses documents for the knowledge base |
| Cohere API Key | Optional | Improves knowledge base search results |
| Google Drive Account | Optional | For automatically updating the knowledge base from a Google Doc |

Installation Steps
1. Import the JSON file to your n8n instance.
2. Configure credentials:
   - Google Gemini: In the "GEmini 2.5 pro" node, create and add your Google Gemini API credential.
   - LlamaIndex: In the three HTTP Request nodes named "Parse Document...", "Monitor Document...", and "Retrieve Parsed...", create an HTTP Header Auth credential. The header name is Authorization and the value is Bearer YOUR_LLAMA_INDEX_API_KEY.
   - Cohere: (Optional) In the "Reranker Cohere" node, create and add your Cohere API credential.
   - Google Drive: (Optional) If you plan to auto-update the knowledge base, configure Google Drive OAuth2 credentials for the "Knowledge Base Updated Trigger" and "Download Knowledge Document" nodes.
3. Update environment-specific values: To use the knowledge base auto-update feature, go to the "Knowledge Base Updated Trigger" node and select the Google Drive file containing your content guidelines.
4. Customize settings: The primary system prompt in the "n8ncreator" agent node can be modified to adjust the tone, style, or structure of the generated content.
5. Test execution: Run the workflow manually and use the form to upload a sample n8n workflow JSON file to verify that all connections work correctly.

**Technical Details**

Core Nodes

| Node | Purpose | Key Configuration |
|------|---------|-------------------|
| Form Trigger | Initiates the workflow via a file upload. | Set the "Input Json Workflow" field to required. |
| Langchain Agent | Orchestrates the entire content creation process. | The system prompt contains all instructions for the AI. |
| ChatGoogleGemini | Provides the core generative AI capabilities. | Select your Gemini model of choice (e.g., gemini-2.5-pro). |
| VectorStoreInMemory | Acts as the agent's knowledge base tool. | Configured to use embeddings from a Google Gemini model. |
| HTTPRequest | Interacts with the LlamaIndex API to parse documents. | Set up with LlamaIndex API endpoint and authentication. |

**Customization Options**

Basic Adjustments:
- **Change AI Model**: Replace the ChatGoogleGemini node with another LLM node (e.g., OpenAI, Anthropic) to use a different provider.
- **Adjust System Prompt**: Modify the prompt in the "n8ncreator" node to tailor the output for different platforms (e.g., blog, internal wiki) or change the writing style.

Advanced Enhancements:
- **Automated Publishing**: Connect the output of the "n8ncreator" node to a Ghost, WordPress, or GitHub node to automatically publish the generated post.
- **Add Web Search**: Equip the Langchain Agent with a web search tool to allow it to fetch live information about new n8n nodes or services.
- **Batch Processing**: Replace the Form Trigger with a Read Binary Files node to process an entire folder of workflow JSON files in a single run.

**Performance & Optimization**

| Metric | Expected Performance | Optimization Tips |
|--------|---------------------|-------------------|
| Execution time | ~1 minute per run | Largely dependent on the Gemini API response time. |
| API calls | 1 LLM call per post | Knowledge base updates trigger LlamaIndex/Google calls separately. |
| Error handling | Built-in retry logic for document parsing | Add an error workflow path after the "n8ncreator" node to handle AI generation failures. |

**Troubleshooting**

Common Issues:

| Problem | Cause | Solution |
|---------|-------|----------|
| AI output is generic or incomplete | The input JSON file is invalid or lacks key information (e.g., no node names). | Ensure you are uploading a valid, exported n8n workflow JSON. Verify the workflow has been saved with descriptive node names. |
| LlamaIndex parsing fails | The LlamaIndex API key is incorrect or the source document is inaccessible. | Double-check your LlamaIndex API credential. Ensure the Google Doc sharing settings allow access. |
| Credential Error | API keys are missing or incorrect for Gemini, LlamaIndex, or Cohere. | Go to the specified nodes and verify that the correct credentials have been created and selected. |

Created by: khaisa Studio
Category: AI
Tags: AI, Content Generation, Google Gemini, LlamaIndex, Automation

Need custom workflows? Contact us
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
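As a supplement to the Data Extraction step above, here is a minimal sketch of pulling node names, types, and connections out of an uploaded n8n workflow JSON so the agent prompt can reference them. The `nodes[]`/`connections{}` shape is n8n's standard export format; the summary fields are illustrative assumptions:

```javascript
// Hedged sketch: summarize an uploaded workflow JSON for the agent prompt.
const wf = JSON.parse($json.data); // file content from the form upload

const summary = {
  name: wf.name,
  nodeCount: wf.nodes.length,
  nodes: wf.nodes.map(n => ({ name: n.name, type: n.type })),
  // Flatten the connection map into readable "A -> B" edges
  edges: Object.entries(wf.connections || {}).flatMap(([from, outs]) =>
    (outs.main || []).flat().map(c => `${from} -> ${c.node}`)
  ),
};

return [{ json: summary }];
```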
by Vinay Gangidi
LOB Underwriting with AI

This template ingests borrower documents from OneDrive, extracts text with OCR, classifies each file (ID, paystub, bank statement, utilities, tax forms, etc.), aggregates everything per borrower, and asks an LLM to produce a clear underwriting summary and decision (plus next steps).

**Good to know**
- AI and OCR usage consume credits (OpenAI + your OCR provider).
- Folder lookups by name can be ambiguous—use a fixed folderId in production.
- Scanned image quality drives OCR accuracy; bad scans yield weak text.
- This flow handles PII—mask sensitive data in logs and control access.
- Start small: batch size and pagination keep costs/memory sane.

**How it works**
1. Import & locate docs: A manual trigger kicks off a OneDrive folder search (e.g., "LOBs") and lists the files inside.
2. Per-file loop: Download each file → run OCR → classify the document type using the filename + extracted text.
3. Aggregate: Combine per-file results into a borrower payload (make BorrowerName dynamic).
4. LLM analysis: Feed the payload to an AI Agent (OpenAI model) to extract underwriting-relevant facts and produce a decision + next steps.
5. Output: Return a human-readable summary (and optionally structured JSON for systems).

**How to use**
- Start with the Manual Trigger to validate end-to-end on a tiny test folder.
- Once stable, swap in a Schedule/Cron or Webhook trigger.
- Review the generated underwriting summary; handle only flagged exceptions (unknown/unreadable docs, low confidence).

**Setup steps**
1. Connect accounts: Add credentials for OneDrive, OCR, and OpenAI.
2. Configure inputs:
   - In Search a folder, point to your borrower docs (prefer folderId; otherwise tighten the name query).
   - In Get items in a folder, enable pagination if the folder is large.
   - In Split in Batches, set a conservative batch size to control costs.
3. Wire the file path:
   - Download a file must receive the current file's id from the folder listing.
   - Make sure the OCR node receives binary input (PDFs/images).
4. Classification:
   - Update keyword rules to match your region/lenders/utilities/tax forms (see the sketch at the end of this entry).
   - Keep a fallback Unknown class and log it for review.
5. Combine: Replace the hard-coded BorrowerName with a Set node field, a form input, or parsing from folder/file naming conventions.
6. AI Agent:
   - Set your OpenAI model/credentials.
   - Ask the model to output JSON first (structured fields) and Markdown second (readable summary).
   - Keep temperature low for consistent, audit-friendly results.
7. Optional outputs: Persist JSON/Markdown to Notion/Docs/DB or write to storage.

**Customize if needed**
- Doc types: add/remove categories and keywords without touching core logic.
- Error handling: add IF paths for empty folders, failed downloads, empty OCR, or Unknown class; retry transient API errors.
- Privacy: redact IDs/account numbers in logs; restrict execution visibility.
- Scale: add MIME/size filters, duplicate detection, and multi-borrower folder patterns (parent → subfolders).
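Here is a minimal sketch of the keyword-based classification step: score each document type by how many of its keywords appear in the filename or OCR text, and fall back to Unknown when nothing matches. The categories follow the description above; the specific keywords and input field names are illustrative assumptions:

```javascript
// Hedged sketch: classify a borrower document from filename + OCR text.
const RULES = {
  ID: ['passport', 'driver license', 'state id'],
  Paystub: ['gross pay', 'net pay', 'pay period'],
  BankStatement: ['beginning balance', 'ending balance', 'account summary'],
  Utility: ['utility', 'electric', 'water bill'],
  TaxForm: ['w-2', '1040', 'schedule c'],
};

const haystack = `${$json.fileName} ${$json.ocrText}`.toLowerCase();

let best = { type: 'Unknown', score: 0 };
for (const [type, keywords] of Object.entries(RULES)) {
  const score = keywords.filter(k => haystack.includes(k)).length;
  if (score > best.score) best = { type, score };
}

return [{ json: { ...$json, docType: best.type, matchScore: best.score } }];
```

Keeping the rules in a plain object means new categories can be added without touching the scoring logic, which matches the "add/remove categories without touching core logic" note above.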
by Rahul Joshi
**Description**
Automatically extract a structured skill matrix from PDF resumes in a Google Drive folder and store the results in Google Sheets. Uses Azure OpenAI (GPT-4o-mini) to analyze predefined tech stacks and filters for relevant proficiency. Fast, consistent insights ready for review. 🔍📊

**What This Template Does**
1. Fetches all resumes from a designated Google Drive folder ("Resume_store"). 🗂️
2. Downloads each resume file securely via the Google Drive API. ⬇️
3. Extracts text from the PDF files for analysis. 📄➡️📝
4. Analyzes skills with Azure OpenAI (GPT-4o-mini), rating each 1–5 and estimating years of experience. 🤖
5. Parses and filters to include only skills with proficiency > 2, then updates Google Sheets ("Resume store" → "Sheet2"). ✅ (A sketch of this parsing step follows at the end of this entry.)

**Key Benefits**
- Saves hours on manual resume screening. ⏱️
- Produces a consistent, structured skill matrix. 📐
- Focuses on intermediate to expert skills for faster shortlisting. 🎯
- Centralizes candidate data in Google Sheets for easy sharing. 🗃️

**Features**
- Predefined tech stack focus: React, Node.js, Angular, Python, Java, SQL, Docker, Kubernetes, AWS, Azure, GCP, HTML, CSS, JavaScript. 🧰
- Proficiency scoring (1–5) and estimated years of experience. 📈
- PDF-to-text extraction for robust parsing. 🧾
- JSON parsing with error handling for invalid outputs. 🛡️
- Manual Trigger to run on demand. ▶️

**Requirements**
- n8n instance (cloud or self-hosted).
- Google Drive access with credentials to the "Resume_store" folder.
- Google Sheets access to the "Resume store" spreadsheet and "Sheet2" tab.
- Azure OpenAI with GPT-4o-mini deployed and connected via secure credentials.
- PDF text extraction enabled within n8n.

**Target Audience**
- HR and Talent Acquisition teams. 👥
- Recruiters and staffing agencies. 🧑‍💼
- Operations teams managing hiring pipelines. 🧭
- Tech hiring managers seeking consistent skill insights. 💡

**Step-by-Step Setup Instructions**
1. Place candidate resumes (PDF) into Google Drive → "Resume_store".
2. In n8n, add Google Drive and Google Sheets credentials and authorize access.
3. In n8n, add Azure OpenAI credentials (GPT-4o-mini deployment).
4. Import the workflow, assign credentials to each node, and confirm the folder/sheet names.
5. Run the Manual Trigger to execute the flow and verify the data in "Resume store" → "Sheet2".
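Here is a minimal sketch of the parse-and-filter step: safely parse the model's JSON output and keep only skills rated above 2. The expected output shape (`{ skills: [{ name, proficiency, years }] }`) and the input field names are assumptions based on the description; align them with your actual prompt:

```javascript
// Hedged sketch: parse the LLM's skill matrix and filter by proficiency > 2.
let parsed;
try {
  parsed = JSON.parse($json.llmOutput);
} catch (err) {
  // Invalid JSON from the model: surface an empty result instead of crashing
  return [{ json: { candidate: $json.fileName, skills: [], parseError: err.message } }];
}

const skills = (parsed.skills || []).filter(s => s.proficiency > 2);

// One row per retained skill, ready for the Google Sheets append node
return skills.map(s => ({
  json: {
    candidate: $json.fileName,
    skill: s.name,
    proficiency: s.proficiency,
    years: s.years,
  },
}));
```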
by Lidia
**Who’s it for**
Teams who want to automatically generate structured meeting minutes from uploaded transcripts and instantly share them in Slack. Perfect for startups, project teams, or any company that collects meeting transcripts in Google Drive.

**How it works / What it does**
This workflow automatically turns raw meeting transcripts into well-structured minutes in Markdown and posts them to Slack:
1. Google Drive Trigger – Watches a specific folder. Any new transcript file added will start the workflow.
2. Download File – Grabs the transcript.
3. Prep Transcript – Converts the file into plain text and passes the transcript downstream.
4. Message a Model – Sends the transcript to OpenAI GPT for summarization using a structured system prompt (action items, decisions, N/A placeholders).
5. Make Minutes – Formats GPT's response into a Markdown file (see the sketch at the end of this entry).
6. Slack: Send a message – Posts a Slack message announcing the auto-generated minutes.
7. Slack: Upload a file – Uploads the full Markdown minutes file into the chosen Slack channel.

End result: your Slack channel always has clear, standardized minutes right after a meeting.

**How to set up**
1. Google Drive
   - Create a folder where you'll drop transcript files.
   - Configure the folder ID in the Google Drive Trigger node.
2. OpenAI
   - Add your OpenAI API credentials in the Message a Model node.
   - Select a supported GPT model (e.g., gpt-4o-mini or gpt-4).
3. Slack
   - Connect your Slack account and set the target channel ID in the Slack nodes.
4. Run the workflow and drop a transcript file into Drive. Minutes will appear in Slack automatically.

**Requirements**
- Google Drive account (for transcript upload)
- OpenAI API key (for text summarization)
- Slack workspace (for message posting and file upload)

**How to customize the workflow**
- **Change summary structure**: Adjust the system prompt inside Message a Model (e.g., shorter summaries, a language other than English).
- **Different output format**: Modify Make Minutes to output plain text, PDF, or HTML instead of Markdown.
- **New destinations**: Add more nodes to send minutes to email, Notion, or Confluence in parallel.
- **Multiple triggers**: Replace the Google Drive trigger with a Webhook if you want to integrate with Zoom or MS Teams transcript exports.

**Good to know**
- OpenAI API calls are billed separately. See OpenAI pricing.
- Files must be text-based (.txt or .md). For PDFs or docs, add a conversion step before summarization.
- Slack requires the bot user to be a member of the target channel, otherwise you'll see a not_in_channel error.
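Here is a minimal sketch of the Make Minutes step: wrap the model's summary in a Markdown document and expose it as a binary file for the Slack upload node. The input field name (`summary`) and the filename scheme are illustrative assumptions:

```javascript
// Hedged sketch: build the Markdown minutes file in an n8n Code node.
const summary = $json.summary;
const date = new Date().toISOString().slice(0, 10);

const markdown = [
  `# Meeting Minutes – ${date}`,
  '',
  summary, // expected to already contain action items / decisions sections
].join('\n');

return [{
  json: { filename: `minutes-${date}.md` },
  binary: {
    data: {
      data: Buffer.from(markdown, 'utf-8').toString('base64'),
      fileName: `minutes-${date}.md`,
      mimeType: 'text/markdown',
    },
  },
}];
```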
by iamvaar
Youtube Video: https://youtu.be/dEtV7OYuMFQ?si=fOAlZWz4aDuFFovH

**Workflow Pre-requisites**

Step 1: Supabase Setup
First, replace the keys in the "Save the embedding in DB" & "Search Embeddings" nodes with your new Supabase keys. After that, run the following code snippets in your Supabase SQL editor.

Create the table to store chunks and embeddings:

```sql
CREATE TABLE public."RAG" (
  id bigserial PRIMARY KEY,
  chunk text NULL,
  embeddings vector(1024) NULL
) TABLESPACE pg_default;
```

Create a function to match embeddings:

```sql
DROP FUNCTION IF EXISTS public.matchembeddings1(integer, vector);

CREATE OR REPLACE FUNCTION public.matchembeddings1(
  match_count integer,
  query_embedding vector
)
RETURNS TABLE (
  chunk text,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    R.chunk,
    1 - (R.embeddings <=> query_embedding) AS similarity
  FROM public."RAG" AS R
  ORDER BY R.embeddings <=> query_embedding
  LIMIT match_count;
END;
$$;
```

Step 2: Create a Jotform with these fields
- Your full name
- Email address
- Upload PDF Document (the field where you upload the knowledgebase in PDF)

Step 3: Get a Together AI API Key
Get a Together AI API key and paste it into the "Embedding Uploaded document" node and the "Embed User Message" node.

Here is a detailed, node-by-node explanation of the n8n workflow, which is divided into two main parts.

**Part 1: Ingesting Knowledge from a PDF**
This first sequence of nodes runs when you submit a PDF through a Jotform. Its purpose is to read the document, process its content, and save it in a specialized database for the AI to use later.

JotForm Trigger
- Type: Trigger
- What it does: This node starts the entire workflow. It's configured to listen for new submissions on a specific Jotform. When someone uploads a file and submits the form, this node activates and passes the submission data to the next step.

Grab New knowledgebase
- Type: HTTP Request
- What it does: The initial trigger from Jotform only contains basic information. This node makes a follow-up call to the Jotform API using the submissionID to get the complete details of that submission, including the specific link to the uploaded file.

Grab the uploaded knowledgebase file link
- Type: HTTP Request
- What it does: Using the file link obtained from the previous node, this step downloads the actual PDF file. It's set to receive the response as a file, not as text.

Extract Text from PDF File
- Type: Extract From File
- What it does: This utility node takes the binary PDF file downloaded in the previous step and extracts all the readable text content from it. The output is a single block of plain text.

Splitting into Chunks
- Type: Code
- What it does: This node runs a small JavaScript snippet. It takes the large block of text from the PDF and chops it into smaller, more manageable pieces, or "chunks," each of a predefined length. This is critical because AI models work more effectively with smaller, focused pieces of text. (A sketch of this snippet follows at the end of this section.)

Embedding Uploaded document
- Type: HTTP Request
- What it does: This is a key AI step. It sends each individual text chunk to an embeddings API. A specified AI model converts the semantic meaning of the chunk into a numerical list called an embedding or vector. This vector is like a mathematical fingerprint of the text's meaning.

Save the embedding in DB
- Type: Supabase
- What it does: This node connects to your Supabase database. For every chunk, it creates a new row in a specified table and stores two important pieces of information: the original text chunk and its corresponding numerical embedding (its "fingerprint") from the previous step.
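Here is a minimal sketch of the "Splitting into Chunks" Code node: slice the extracted PDF text into fixed-size chunks with a small overlap so sentences that straddle a boundary still appear intact in at least one chunk. The sizes are illustrative defaults, not the workflow's exact values:

```javascript
// Hedged sketch: split extracted PDF text into overlapping chunks.
const text = $json.text;       // plain text from the Extract From File node
const CHUNK_SIZE = 1000;       // characters per chunk
const OVERLAP = 100;           // characters shared between adjacent chunks

const chunks = [];
for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
  chunks.push(text.slice(start, start + CHUNK_SIZE));
}

// One item per chunk so downstream nodes embed and save them individually
return chunks.map(chunk => ({ json: { chunk } }));
```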
**Part 2: Answering Questions via Chat**
This second sequence starts when a user sends a message. It uses the knowledge stored in the database to find relevant information and generate an intelligent answer.

When chat message received
- Type: Chat Trigger
- What it does: This node starts the second part of the workflow. It listens for any incoming message from a user in a connected chat application.

Embed User Message
- Type: HTTP Request
- What it does: This node takes the user's question and sends it to the exact same embeddings API and model used in Part 1. This converts the question's meaning into the same kind of numerical vector or "fingerprint."

Search Embeddings
- Type: HTTP Request
- What it does: This is the "retrieval" step. It calls a custom database function in Supabase. It sends the question's embedding to this function and asks it to search the knowledge base table to find a specified number of top text chunks whose embeddings are mathematically most similar to the question's embedding. (A sketch of this call follows below.)

Aggregate
- Type: Aggregate
- What it does: The search from the previous step returns multiple separate items. This utility node simply bundles those items into a single, combined piece of data. This makes it easier to feed all the context into the final AI model at once.

AI Agent & Google Gemini Chat Model
- Type: LangChain Agent & AI Model
- What it does: This is the "generation" step where the final answer is created. The AI Agent node is given a detailed set of instructions (a prompt). The prompt tells the Google Gemini Chat Model to act as a professional support agent. Crucially, it provides the AI with the user's original question and the aggregated text chunks from the Aggregate node as its only source of truth. It then instructs the AI to formulate an answer based only on that provided context, format it for a specific chat style, and to say "I don't know" if the answer cannot be found in the chunks. This prevents the AI from making things up.
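For reference, here is a minimal sketch of the "Search Embeddings" call. Supabase exposes Postgres functions over its REST API at `/rest/v1/rpc/<function>`, so the node POSTs the question's vector to the `matchembeddings1` function defined earlier. The URL and key constants are placeholders, and `questionVector` stands in for the output of the embed step:

```javascript
// Hedged sketch: call the matchembeddings1 function via Supabase's REST RPC.
const SUPABASE_URL = 'https://YOUR-PROJECT.supabase.co'; // placeholder
const SUPABASE_KEY = 'YOUR-ANON-OR-SERVICE-KEY';         // placeholder
const questionVector = $json.embedding;                  // 1024-dim vector

const res = await fetch(`${SUPABASE_URL}/rest/v1/rpc/matchembeddings1`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    apikey: SUPABASE_KEY,
    Authorization: `Bearer ${SUPABASE_KEY}`,
  },
  body: JSON.stringify({
    match_count: 5,                  // how many top chunks to retrieve
    query_embedding: questionVector, // compared with pgvector's <=> operator
  }),
});
const matches = await res.json();    // [{ chunk, similarity }, ...]
return matches.map(m => ({ json: m }));
```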
by Pedro Entringer
🧠 Export Tawk.to Help Center Articles to Google Drive as Markdown Files

Transform the way you manage your knowledge base with this fully automated n8n workflow! This automation connects directly to your Tawk.to Help Center, reads all published categories and articles, converts them to Markdown (.md) format, and uploads each file to Google Drive.

**🔹 Key Benefits**
- 🚀 Complete Extraction: Automatically captures all categories and articles from your Tawk.to Help Center, even without direct API integration.
- 🧩 Automatic Conversion: Transforms HTML content into clean Markdown files — perfect for editing, version control, or migration to another CMS.
- ☁️ Native Google Drive Integration: Saves each article with a structured filename, avoids duplicates, and organizes them by category (see the sketch at the end of this entry).
- 🔁 Fully Customizable: Easily adapt the workflow to export to Notion, GitHub, Dropbox, or any other platform supported by n8n.

**💡 Ideal Use Cases**
- Migrating your Tawk.to Help Center
- Creating automated content backups
- Integrating documentation across multiple systems

**⚙️ Prerequisites**
Before running this workflow, make sure you have:
- An active Tawk.to account with access to your Help Center.
- A Google Drive account (personal or workspace).
- Access to n8n (self-hosted or cloud).

**🧰 Setup Instructions**
1. Import the Workflow: Download the JSON file from the provided link or your n8n community instance. In n8n, click Import Workflow and upload the file.
2. Authenticate Google Drive: Open the Google Drive node, click Connect, choose your Google account, and allow access.
3. Configure the Output Folder: Choose or create a target folder in your Google Drive where articles will be saved.
4. Run the Workflow: Click Execute Workflow. The automation will read all Help Center articles, convert them to Markdown, and save them to your Drive.
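Here is a minimal sketch of the structured-filename step mentioned under Key Benefits: build a category-prefixed, URL-safe `.md` filename for each article so duplicates are easy to detect when searching Drive. The input field names (`category`, `title`) are illustrative assumptions:

```javascript
// Hedged sketch: derive a stable, slugged filename per article.
function slugify(value) {
  return value
    .toLowerCase()
    .normalize('NFD')                  // split accented characters
    .replace(/[\u0300-\u036f]/g, '')   // drop the accents
    .replace(/[^a-z0-9]+/g, '-')       // everything else becomes a dash
    .replace(/^-+|-+$/g, '');
}

return items.map(item => {
  const { category, title } = item.json;
  return {
    json: {
      ...item.json,
      fileName: `${slugify(category)}--${slugify(title)}.md`,
    },
  };
});
```

Because the name is derived deterministically from category and title, a Drive search for the same name before upload is enough to skip files that already exist.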
by Supira Inc.
**Overview**
This template automates invoice processing for teams that currently copy data from PDFs into spreadsheets by hand. It is ideal for small businesses, back-office teams, accounting, and operations who want to reduce manual entry, avoid human error, and never miss a payment deadline. The workflow watches a structured Google Drive folder, performs OCR, converts the text into clean structured JSON with an LLM, and appends one row per invoice into Google Sheets. It preserves a link back to the original file for easy review and audit.

- **Designed for small businesses and back-office teams.**
- **Eliminates manual typing** and reduces errors.
- **Prevents missed due dates** by centralizing data.
- Works with monthly subfolders like "2025年10月分" (meaning "October 2025").
- Keeps a Google Drive link to each invoice file.

**How It Works**
The workflow runs on a schedule, scans your Drive folder hierarchy, OCRs the PDFs/images, cleans the text, extracts key fields with an LLM, and appends a row to Google Sheets per invoice. Each step is modular so you can swap services or tweak prompts without breaking the flow.
1. **Scheduled trigger** runs on a recurring cadence.
2. **Scan the parent folder** in Google Drive.
3. **Auto-detect the current-month folder** (e.g., a folder named "2025年10月分" meaning "October 2025"); a sketch of this detection follows at the end of this entry.
4. **Download PDFs/images** from the detected folder.
5. **Extract text** using the OCR.Space API.
6. **Clean noise** and normalize with a Code node.
7. **Use an OpenAI model** to extract invoice_date, due_date, client_name, line items, totals, and bank info to JSON.
8. **Append one row per invoice** to Google Sheets.

**Requirements**
Before you start, make sure you have access to the required services and that your Drive is organized into monthly subfolders so the workflow can find the right files.
- **n8n account.**
- **Google Drive access.**
- **Google Sheets access.**
- **OCR.Space API key** (set as <your_ocr_api_key>).
- **OpenAI / LLM API credential** (e.g., <your_openai_credential_name>).
- **Invoice PDFs organized by month** on Google Drive (e.g., folders like "2025年10月分").

**Setup Instructions**
Import the workflow, replace the placeholder credentials and IDs with your own, and enable the schedule. You can also run it manually for testing. The parent-folder query and sheet ID must reflect your environment.
1. Replace <your_google_drive_credential_id> and <your_google_drive_credential_name> with your Google Drive credential.
2. Adjust the parent folder search query to your invoice repository name.
3. Replace the Sheets document ID <your_google_sheet_id> with your spreadsheet ID.
4. Ensure your OpenAI credential <your_openai_credential_name> is selected.
5. Set your OCR.Space key as <your_ocr_api_key>.
6. **Enable the Schedule Trigger** after testing.

**Customization**
This workflow is easily extensible. You can adapt the folder naming rules, enrich the spreadsheet schema, and expand the AI prompt to extract custom fields specific to your company. It also works beyond invoices, covering receipts, quotes, or purchase orders with minor changes.
- Change the monthly folder naming rule such as {{$now.setZone("Asia/Tokyo").format("yyyy年MM月")}}分 to match your convention.
- Modify or extend the Google Sheets column mappings as needed.
- Tune the AI prompt to extract project codes, owner names, or custom fields.
- Repurpose for receipts, quotes, or purchase orders.
- Localize date formats and tax calculation rules to your standards.
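Here is a minimal sketch of the current-month folder detection referenced above: build the expected Japanese folder name (e.g., "2025年10月分") for the current month in Asia/Tokyo, then pick the matching subfolder from the Drive listing. The assumption that the listing arrives as one item per subfolder with a `name` field is illustrative:

```javascript
// Hedged sketch: compute "YYYY年MM月分" for Asia/Tokyo and find that folder.
const parts = new Intl.DateTimeFormat('ja-JP', {
  timeZone: 'Asia/Tokyo',
  year: 'numeric',
  month: '2-digit',
}).formatToParts(new Date());

const year = parts.find(p => p.type === 'year').value;
const month = parts.find(p => p.type === 'month').value;
const expectedName = `${year}年${month}月分`; // e.g. "2025年10月分"

// items = subfolders returned by the Google Drive list node
const target = items.find(i => i.json.name === expectedName);
if (!target) {
  throw new Error(`No folder named ${expectedName} found`);
}
return [target];
```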