by Avkash Kakdiya
## How it works
This workflow runs on a schedule to collect task data from ClickUp and evaluate employee performance using AI. Tasks are analyzed to generate structured summaries, productivity metrics, and KPI scores. JavaScript logic refines and standardizes the results. The final performance data is stored in Google Sheets as a live KPI dashboard.

## Step-by-step

**Step 1: Collect ClickUp task data**
- Schedule Trigger – Starts the workflow automatically at defined intervals.
- Get many folders – Fetches all folders from the selected ClickUp space.
- Loop Over Items – Iterates through folders to process tasks sequentially.
- Get many tasks – Retrieves tasks associated with each folder or list.

**Step 2: Analyze tasks and compute KPIs**
- Message a model – Sends task details to an AI model to generate summaries and raw performance metrics.
- Code in JavaScript – Parses AI output, recalculates KPI scores, and assigns standardized ratings (see the sketch at the end of this template).

**Step 3: Update employee KPI dashboard**
- Append or update row in sheet – Writes or updates task and employee performance data in Google Sheets.

## Why use this?
- Automates employee performance tracking without manual reporting.
- Produces consistent KPI scores across all ClickUp tasks.
- Helps managers quickly identify overdue or high-priority work.
- Keeps Google Sheets dashboards continuously up to date.
- Improves visibility into productivity and task execution trends.
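As a reference for the "Code in JavaScript" step, here is a minimal sketch of what that node could contain. The AI output path and the KPI fields (`summary`, `completionRate`, `onTimeRate`) are assumptions for illustration only; adapt them to your actual "Message a model" output and KPI definitions.

```javascript
// Hypothetical sketch of the "Code in JavaScript" node (Run Once for All Items).
// Assumes each incoming item carries the raw AI analysis as a JSON string at
// json.message.content — adjust the path to match your "Message a model" output.
return $input.all().map((item) => {
  const raw = item.json.message?.content ?? '{}';

  let ai;
  try {
    ai = JSON.parse(raw);
  } catch (e) {
    ai = { summary: raw, completionRate: 0, onTimeRate: 0 }; // fall back gracefully
  }

  // Recalculate a simple 0–100 KPI score from the assumed metrics.
  const kpiScore = Math.round(
    (Number(ai.completionRate ?? 0) * 0.6 + Number(ai.onTimeRate ?? 0) * 0.4) * 100
  );

  // Map the score onto standardized ratings.
  const rating = kpiScore >= 85 ? 'Excellent'
    : kpiScore >= 70 ? 'Good'
    : kpiScore >= 50 ? 'Needs Improvement'
    : 'At Risk';

  return { json: { ...item.json, summary: ai.summary, kpiScore, rating } };
});
```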
by Amine ARAGRAG
This n8n template automates the collection and enrichment of Product Hunt posts using AI and Google Sheets. It fetches new tools daily, translates content, categorizes them intelligently, and saves everything into a structured spreadsheet—ideal for building directories, research dashboards, newsletters, or competitive intelligence assets.

## Good to know
- Sticky notes inside the workflow explain each functional block and required configurations.
- Uses cursor-based pagination to safely fetch Product Hunt data.
- The AI agent handles translation, documentation generation, tech extraction, and function area classification.
- Category translations are synced with a Google Sheets dictionary to avoid duplicates.
- All enriched entries are stored in a clean “Tools” sheet for easy filtering or reporting.

## How it works
- A schedule trigger starts the workflow daily.
- Product Hunt posts are retrieved via GraphQL and processed in batches.
- A code node restructures each product into a consistent schema (see the sketch at the end of this template).
- The workflow checks if a product already exists in Google Sheets.
- For new items, the AI agent generates metadata, translations, and documentation.
- Categories are matched or added to a Google Sheets dictionary.
- The final enriched product entry is appended or updated in the spreadsheet.
- Pagination continues until no next page remains.

## How to use
- Connect Product Hunt OAuth2, Google Sheets, and OpenAI credentials.
- Adjust the schedule trigger to your preferred frequency.
- Optionally expand enrichment fields (tags, scoring, custom classifications).
- Replace the trigger with a webhook or manual trigger if needed.

## Requirements
- Product Hunt OAuth2 credentials
- Google Sheets account
- OpenAI (or compatible) API access

## Customising this workflow
- Add Slack or Discord notifications for new tools.
- Push enriched data to Airtable, Notion, or a database.
- Extend AI enrichment with summaries or SEO fields.
- Use the Google Sheet as a backend for dashboards or frontend applications.
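A minimal sketch of the restructuring code node, assuming a typical Product Hunt GraphQL response shaped like `data.posts.edges[].node` with `pageInfo` for cursor-based pagination; the exact fields depend on the query you send.

```javascript
// Hypothetical sketch of the restructuring Code node.
// Assumes the previous HTTP Request node returned a Product Hunt GraphQL
// response with posts.edges[].node plus pageInfo — adjust the paths to match
// your actual query.
const response = $input.first().json;
const posts = response.data?.posts ?? {};
const edges = posts.edges ?? [];

return edges.map(({ node }) => ({
  json: {
    id: node.id,
    name: node.name,
    tagline: node.tagline,
    url: node.url,
    votes: node.votesCount,
    topics: (node.topics?.edges ?? []).map((t) => t.node.name).join(', '),
    // Carry pagination info so a later IF node can decide whether to loop again.
    hasNextPage: posts.pageInfo?.hasNextPage ?? false,
    endCursor: posts.pageInfo?.endCursor ?? null,
  },
}));
```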
by Rahul Joshi
## Description
Automatically extract a structured skill matrix from PDF resumes in a Google Drive folder and store results in Google Sheets. Uses Azure OpenAI (GPT-4o-mini) to analyze predefined tech stacks and filters for relevant proficiency. Fast, consistent insights ready for review. 🔍📊

## What This Template Does
- Fetches all resumes from a designated Google Drive folder (“Resume_store”). 🗂️
- Downloads each resume file securely via the Google Drive API. ⬇️
- Extracts text from PDF files for analysis. 📄➡️📝
- Analyzes skills with Azure OpenAI (GPT-4o-mini), rating 1–5 and estimating years. 🤖
- Parses and filters to include only skills with proficiency > 2, then updates Google Sheets (“Resume store” → “Sheet2”); see the sketch at the end of this template. ✅

## Key Benefits
- Saves hours on manual resume screening. ⏱️
- Produces a consistent, structured skill matrix. 📐
- Focuses on intermediate to expert skills for faster shortlisting. 🎯
- Centralizes candidate data in Google Sheets for easy sharing. 🗃️

## Features
- Predefined tech stack focus: React, Node.js, Angular, Python, Java, SQL, Docker, Kubernetes, AWS, Azure, GCP, HTML, CSS, JavaScript. 🧰
- Proficiency scoring (1–5) and estimated years of experience. 📈
- PDF-to-text extraction for robust parsing. 🧾
- JSON parsing with error handling for invalid outputs. 🛡️
- Manual Trigger to run on demand. ▶️

## Requirements
- n8n instance (cloud or self-hosted).
- Google Drive access with credentials to the “Resume_store” folder.
- Google Sheets access to the “Resume store” spreadsheet and “Sheet2” tab.
- Azure OpenAI with GPT-4o-mini deployed and connected via secure credentials.
- PDF text extraction enabled within n8n.

## Target Audience
- HR and Talent Acquisition teams. 👥
- Recruiters and staffing agencies. 🧑💼
- Operations teams managing hiring pipelines. 🧭
- Tech hiring managers seeking consistent skill insights. 💡

## Step-by-Step Setup Instructions
1. Place candidate resumes (PDF) into Google Drive → “Resume_store”.
2. In n8n, add Google Drive and Google Sheets credentials and authorize access.
3. In n8n, add Azure OpenAI credentials (GPT-4o-mini deployment).
4. Import the workflow, assign credentials to each node, and confirm folder/sheet names.
5. Run the Manual Trigger to execute the flow and verify data in “Resume store” → “Sheet2”.
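A minimal sketch of the parse-and-filter step as an n8n Code node. It assumes the Azure OpenAI node returns a JSON array of skills such as `[{ "skill": "React", "proficiency": 4, "years": 3 }]`; the field names are illustrative and should match the structure your prompt requests.

```javascript
// Hypothetical sketch of the parse-and-filter step.
const raw = $input.first().json.message?.content ?? '[]';

let skills = [];
try {
  skills = JSON.parse(raw);
} catch (e) {
  // Invalid model output: return nothing instead of failing the run.
  return [];
}

// Keep only intermediate-to-expert skills (proficiency > 2).
return skills
  .filter((s) => Number(s.proficiency) > 2)
  .map((s) => ({
    json: {
      skill: s.skill,
      proficiency: Number(s.proficiency),
      years: Number(s.years ?? 0),
    },
  }));
```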
by Kevin Meneses
## What this workflow does
This workflow scrapes customer reviews from Trustpilot, analyzes them with AI, and keeps both Salesforce and Google Sheets automatically updated with customer sentiment insights. It uses Decodo to reliably extract review content from Trustpilot, processes the text with OpenAI, and orchestrates everything using n8n.

👉 Decodo

## How it works (high level)
- Reads Trustpilot review URLs and Salesforce Account IDs from Google Sheets
- Scrapes Trustpilot reviews using Decodo
- Uses AI to summarize sentiment, trends, and key positives/negatives
- Generates two outputs in parallel

## Outputs generated

**1. Salesforce Account update**
The workflow updates an existing Salesforce Account by writing the AI-generated sentiment summary into a custom text field (e.g. recent_trend_summary__c). This brings external customer feedback directly into Salesforce, allowing teams to work with real market perception inside the CRM.

**2. Google Sheets analytics dataset**
At the same time, structured review metrics are stored in Google Sheets:
- Ratings and sentiment distribution
- Top positive and negative keywords
- Trend summaries over time

This creates a reusable dataset for dashboards and reporting (see the sketch at the end of this template).

## How to configure it (general)
- **Google Sheets**: add review URLs + Salesforce Account IDs
- **Decodo**: add your API key to scrape Trustpilot reliably
- **OpenAI**: add your API key for AI analysis
- **Salesforce**:
  - Create a custom Text (255) field on Account
  - Connect Salesforce credentials in n8n
  - Only existing Accounts are updated
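A minimal sketch of how the structured Google Sheets metrics could be assembled in an n8n Code node. The review item shape (`{ rating, text }`) is an assumption about the Decodo output, and the keyword counting below is only a rough stand-in for the AI keyword extraction.

```javascript
// Hypothetical sketch: aggregate scraped reviews into dashboard-ready metrics.
const reviews = $input.all().map((i) => i.json);

// Rating distribution (1–5 stars).
const distribution = { 1: 0, 2: 0, 3: 0, 4: 0, 5: 0 };
for (const r of reviews) {
  const stars = Number(r.rating);
  if (distribution[stars] !== undefined) distribution[stars] += 1;
}

// Rough keyword counts as a stand-in for the AI keyword extraction.
const counts = {};
for (const r of reviews) {
  for (const word of String(r.text ?? '').toLowerCase().match(/[a-z]{4,}/g) ?? []) {
    counts[word] = (counts[word] ?? 0) + 1;
  }
}
const topKeywords = Object.entries(counts)
  .sort((a, b) => b[1] - a[1])
  .slice(0, 10)
  .map(([word]) => word);

return [{ json: { totalReviews: reviews.length, distribution, topKeywords } }];
```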
by Lidia
## Who’s it for
Teams who want to automatically generate structured meeting minutes from uploaded transcripts and instantly share them in Slack. Perfect for startups, project teams, or any company that collects meeting transcripts in Google Drive.

## How it works / What it does
This workflow automatically turns raw meeting transcripts into well-structured minutes in Markdown and posts them to Slack:
- Google Drive Trigger – Watches a specific folder. Any new transcript file added will start the workflow.
- Download File – Grabs the transcript.
- Prep Transcript – Converts the file into plain text and passes the transcript downstream.
- Message a Model – Sends the transcript to OpenAI GPT for summarization using a structured system prompt (action items, decisions, N/A placeholders).
- Make Minutes – Formats GPT’s response into a Markdown file (see the sketch at the end of this template).
- Slack: Send a message – Posts a Slack message announcing the auto-generated minutes.
- Slack: Upload a file – Uploads the full Markdown minutes file into the chosen Slack channel.

End result: your Slack channel always has clear, standardized minutes right after a meeting.

## How to set up
1. Google Drive
   - Create a folder where you’ll drop transcript files.
   - Configure the folder ID in the Google Drive Trigger node.
2. OpenAI
   - Add your OpenAI API credentials in the Message a Model node.
   - Select a supported GPT model (e.g., gpt-4o-mini or gpt-4).
3. Slack
   - Connect your Slack account and set the target channel ID in the Slack nodes.
4. Run the workflow and drop a transcript file into Drive. Minutes will appear in Slack automatically.

## Requirements
- Google Drive account (for transcript upload)
- OpenAI API key (for text summarization)
- Slack workspace (for message posting and file upload)

## How to customize the workflow
- **Change summary structure**: Adjust the system prompt inside Message a Model (e.g., shorter summaries, language other than English).
- **Different output format**: Modify Make Minutes to output plain text, PDF, or HTML instead of Markdown.
- **New destinations**: Add more nodes to send minutes to email, Notion, or Confluence in parallel.
- **Multiple triggers**: Replace the Google Drive trigger with a Webhook if you want to integrate with Zoom or MS Teams transcript exports.

## Good to know
- OpenAI API calls are billed separately. See OpenAI pricing.
- Files must be text-based (.txt or .md). For PDFs or docs, add a conversion step before summarization.
- Slack requires the bot user to be a member of the target channel, otherwise you’ll see a not_in_channel error.
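A minimal sketch of the "Make Minutes" step as an n8n Code node, assuming the model’s Markdown summary is available at `json.message.content`; it returns both the text and a binary .md file for the Slack upload node. Adjust the input path to your actual "Message a Model" output.

```javascript
// Hypothetical sketch of the "Make Minutes" Code node.
const summary = $input.first().json.message?.content ?? 'N/A';
const today = new Date().toISOString().slice(0, 10);

const markdown = [
  `# Meeting Minutes – ${today}`,
  '',
  summary,
].join('\n');

// Return both the text (for the Slack message) and a binary .md file
// (for the Slack file upload node).
return [{
  json: { fileName: `minutes-${today}.md`, markdown },
  binary: {
    data: {
      data: Buffer.from(markdown, 'utf8').toString('base64'),
      mimeType: 'text/markdown',
      fileName: `minutes-${today}.md`,
    },
  },
}];
```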
by Adam Gałęcki
## How it works
This workflow automates comprehensive SEO reporting by:
- Extracting keyword rankings and page performance from Google Search Console.
- Gathering organic reach metrics from Google Analytics.
- Analyzing internal and external article links.
- Tracking keyword position changes (gains and losses).
- Formatting and importing all data into Google Sheets reports.

## Set up steps
1. Connect Google Services: Authenticate Google Search Console, Google Analytics, and Google Sheets OAuth2 credentials.
2. Configure Source Sheet: Set up a data source Google Sheet with article URLs to analyze.
3. Set Report Sheet: Create or specify destination Google Sheets for reports.
4. Update Date Ranges: Modify date parameters in GSC and GA nodes for your reporting period.
5. Customize Filters: Adjust keyword filters and row limits based on your needs.
6. Test Individual Sections: Each reporting section (keywords, pages, articles, position changes) can be tested independently.

The workflow includes separate flows for:
- Keyword ranking (top 1000).
- Page ranking analysis.
- Organic reach reporting.
- Internal article link analysis.
- External article link analysis.
- Position gain/loss tracking (see the sketch below).
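A minimal sketch of how position gain/loss tracking could be computed in an n8n Code node, assuming two earlier nodes provide `{ keyword, position }` rows for the current and previous periods; the node names "Current GSC data" and "Previous GSC data" are placeholders.

```javascript
// Hypothetical sketch: compare keyword positions between two reporting periods.
const current = $('Current GSC data').all().map((i) => i.json);
const previous = $('Previous GSC data').all().map((i) => i.json);

const prevByKeyword = Object.fromEntries(
  previous.map((row) => [row.keyword, Number(row.position)])
);

return current.map((row) => {
  const before = prevByKeyword[row.keyword];
  const now = Number(row.position);
  // Positive change = the keyword moved up (a lower position number is better).
  const change = before !== undefined ? before - now : null;
  return {
    json: {
      keyword: row.keyword,
      previousPosition: before ?? 'new',
      currentPosition: now,
      change,
      trend: change === null ? 'new' : change > 0 ? 'gain' : change < 0 ? 'loss' : 'stable',
    },
  };
});
```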
by iamvaar
Youtube Video: https://youtu.be/dEtV7OYuMFQ?si=fOAlZWz4aDuFFovH

## Workflow Pre-requisites

### Step 1: Supabase Setup
First, replace the keys in the "Save the embedding in DB" & "Search Embeddings" nodes with your new Supabase keys. After that, run the following code snippets in your Supabase SQL editor.

Create the table to store chunks and embeddings:

```sql
CREATE TABLE public."RAG" (
  id bigserial PRIMARY KEY,
  chunk text NULL,
  embeddings vector(1024) NULL
) TABLESPACE pg_default;
```

Create a function to match embeddings:

```sql
DROP FUNCTION IF EXISTS public.matchembeddings1(integer, vector);

CREATE OR REPLACE FUNCTION public.matchembeddings1(
  match_count integer,
  query_embedding vector
)
RETURNS TABLE (
  chunk text,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    R.chunk,
    1 - (R.embeddings <=> query_embedding) AS similarity
  FROM public."RAG" AS R
  ORDER BY R.embeddings <=> query_embedding
  LIMIT match_count;
END;
$$;
```

### Step 2: Create a Jotform with these fields
- Your full name
- Email address
- Upload PDF Document [field where you upload the knowledgebase in PDF]

### Step 3: Get a Together AI API Key
Get a Together AI API key and paste it into the "Embedding Uploaded document" node and the "Embed User Message" node.

Here is a detailed, node-by-node explanation of the n8n workflow, which is divided into two main parts.

## Part 1: Ingesting Knowledge from a PDF
This first sequence of nodes runs when you submit a PDF through a Jotform. Its purpose is to read the document, process its content, and save it in a specialized database for the AI to use later.

**JotForm Trigger**
- Type: Trigger
- What it does: Starts the entire workflow. It's configured to listen for new submissions on a specific Jotform. When someone uploads a file and submits the form, this node activates and passes the submission data to the next step.

**Grab New knowledgebase**
- Type: HTTP Request
- What it does: The initial trigger from Jotform only contains basic information. This node makes a follow-up call to the Jotform API using the submissionID to get the complete details of that submission, including the specific link to the uploaded file.

**Grab the uploaded knowledgebase file link**
- Type: HTTP Request
- What it does: Using the file link obtained from the previous node, this step downloads the actual PDF file. It's set to receive the response as a file, not as text.

**Extract Text from PDF File**
- Type: Extract From File
- What it does: This utility node takes the binary PDF file downloaded in the previous step and extracts all the readable text content from it. The output is a single block of plain text.

**Splitting into Chunks**
- Type: Code
- What it does: This node runs a small JavaScript snippet. It takes the large block of text from the PDF and chops it into smaller, more manageable pieces, or "chunks," each of a predefined length. This is critical because AI models work more effectively with smaller, focused pieces of text (a sketch of this node follows at the end of Part 1).

**Embedding Uploaded document**
- Type: HTTP Request
- What it does: This is a key AI step. It sends each individual text chunk to an embeddings API. A specified AI model converts the semantic meaning of the chunk into a numerical list called an embedding, or vector. This vector is like a mathematical fingerprint of the text's meaning.

**Save the embedding in DB**
- Type: Supabase
- What it does: This node connects to your Supabase database. For every chunk, it creates a new row in the specified table and stores two important pieces of information: the original text chunk and its corresponding numerical embedding (its "fingerprint") from the previous step.
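A minimal sketch of what the "Splitting into Chunks" Code node could look like. The chunk size is an assumption; tune it to your embedding model, and adjust the input property name if your Extract From File node outputs the text under a different field.

```javascript
// Hypothetical sketch of the "Splitting into Chunks" Code node.
const text = $input.first().json.text ?? '';
const CHUNK_SIZE = 1000; // characters per chunk (assumed value)

const chunks = [];
for (let i = 0; i < text.length; i += CHUNK_SIZE) {
  chunks.push(text.slice(i, i + CHUNK_SIZE));
}

// Emit one n8n item per chunk so the embedding node runs once per chunk.
return chunks.map((chunk) => ({ json: { chunk } }));
```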
## Part 2: Answering Questions via Chat
This second sequence starts when a user sends a message. It uses the knowledge stored in the database to find relevant information and generate an intelligent answer.

**When chat message received**
- Type: Chat Trigger
- What it does: This node starts the second part of the workflow. It listens for any incoming message from a user in a connected chat application.

**Embed User Message**
- Type: HTTP Request
- What it does: This node takes the user's question and sends it to the exact same embeddings API and model used in Part 1. This converts the question's meaning into the same kind of numerical vector, or "fingerprint."

**Search Embeddings**
- Type: HTTP Request
- What it does: This is the "retrieval" step. It calls a custom database function in Supabase. It sends the question's embedding to this function and asks it to search the knowledge base table for a specified number of top text chunks whose embeddings are mathematically most similar to the question's embedding (see the sketch at the end of this template).

**Aggregate**
- Type: Aggregate
- What it does: The search from the previous step returns multiple separate items. This utility node simply bundles those items into a single, combined piece of data. This makes it easier to feed all the context into the final AI model at once.

**AI Agent & Google Gemini Chat Model**
- Type: LangChain Agent & AI Model
- What it does: This is the "generation" step where the final answer is created. The AI Agent node is given a detailed set of instructions (a prompt). The prompt tells the Google Gemini Chat Model to act as a professional support agent. Crucially, it provides the AI with the user's original question and the aggregated text chunks from the Aggregate node as its only source of truth. It then instructs the AI to formulate an answer based only on that provided context, format it for a specific chat style, and say "I don't know" if the answer cannot be found in the chunks. This prevents the AI from making things up.
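The "Search Embeddings" HTTP Request calls the matchembeddings1 function created in Step 1 through Supabase's REST RPC endpoint. The sketch below shows an equivalent call in plain JavaScript so the request shape is easy to see; SUPABASE_URL and SUPABASE_KEY are placeholders, and the exact node configuration in your workflow may differ.

```javascript
// Illustrative sketch of the "Search Embeddings" call against Supabase's
// PostgREST RPC endpoint for the matchembeddings1 function defined above.
const SUPABASE_URL = 'https://your-project.supabase.co'; // placeholder
const SUPABASE_KEY = 'your-anon-or-service-key';         // placeholder

async function searchEmbeddings(queryEmbedding, matchCount = 5) {
  const res = await fetch(`${SUPABASE_URL}/rest/v1/rpc/matchembeddings1`, {
    method: 'POST',
    headers: {
      apikey: SUPABASE_KEY,
      Authorization: `Bearer ${SUPABASE_KEY}`,
      'Content-Type': 'application/json',
    },
    // Parameter names must match the SQL function signature.
    body: JSON.stringify({ match_count: matchCount, query_embedding: queryEmbedding }),
  });
  return res.json(); // -> [{ chunk, similarity }, ...]
}
```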
by Omer Fayyaz
An automated PDF download and management system that collects PDFs from URLs, uploads them to Google Drive, extracts metadata, and maintains a searchable library with comprehensive error handling and status tracking.

## What Makes This Different
- **Intelligent URL Validation** - Validates PDF URLs before attempting download, extracting filenames from URLs and generating fallback names when needed, preventing wasted processing time
- **Binary File Handling** - Properly handles PDF downloads as binary files with appropriate headers (User-Agent, Accept), ensuring compatibility with various PDF hosting services
- **Comprehensive Error Handling** - Three-tier error handling: invalid URLs are marked immediately, failed downloads are logged with error messages, and all errors are tracked in a dedicated Error Log sheet
- **Metadata Extraction** - Automatically extracts file ID, size, MIME type, Drive view links, and download URLs from Google Drive responses, creating a complete file record
- **Multiple Trigger Options** - Supports manual execution, scheduled runs (every 12 hours), and workflow-to-workflow calls, making it flexible for different automation scenarios
- **Status Tracking** - Updates source spreadsheet with processing status (Downloaded, Failed, Invalid), enabling easy monitoring and retry logic for failed downloads

## Key Benefits of Automated PDF Management
- **Centralized Storage** - All PDFs are automatically organized in a Google Drive folder, making them easy to find and share across your organization
- **Searchable Library** - Metadata is stored in Google Sheets with file links, titles, sources, and download dates, enabling quick searches and filtering
- **Error Recovery** - Failed downloads are logged with error messages, allowing you to identify and fix issues (broken links, access permissions, etc.) and retry
- **Automated Processing** - Schedule-based execution keeps your PDF library updated without manual intervention, perfect for monitoring research sources
- **Integration Ready** - Can be called by other workflows, enabling complex automation chains (e.g., scrape URLs → download PDFs → process content)
- **Bulk Processing** - Processes multiple PDFs in sequence from a spreadsheet, handling large batches efficiently with proper error isolation

## Who's it for
This template is designed for researchers, academic institutions, market research teams, legal professionals, compliance officers, and anyone who needs to systematically collect and organize PDF documents from multiple sources. It's perfect for organizations that need to build research libraries, archive regulatory documents, collect industry reports, maintain compliance documentation, or aggregate academic papers without manually downloading and organizing each file.

## How it works / What it does
This workflow creates a PDF collection and management system that reads PDF URLs from Google Sheets, downloads the files, uploads them to Google Drive, extracts metadata, and maintains a searchable library.
The system:
- **Reads Pending PDF URLs** - Fetches PDF URLs from Google Sheets "PDF URLs" sheet, processing entries that need to be downloaded
- **Loops Through PDFs** - Processes PDFs one at a time using Split in Batches, ensuring proper error isolation and preventing batch failures
- **Prepares Download Info** - Extracts filename from URL, decodes URL-encoded characters, validates PDF URL format, and generates fallback filenames with timestamps if needed (see the sketch at the end of this template)
- **Validates URL** - Checks if URL is valid before attempting download, skipping invalid entries immediately
- **Downloads PDF** - Makes HTTP request with proper browser headers, downloads PDF as binary file with 60-second timeout, handles download errors gracefully
- **Verifies Download** - Checks if binary data was successfully received, routing to error handling if download failed
- **Uploads to Google Drive** - Uploads PDF file to specified Google Drive folder, preserving original filename or using generated name
- **Extracts File Metadata** - Extracts file ID, name, MIME type, file size, Drive view link, and download link from Google Drive API response
- **Saves to PDF Library** - Appends file metadata to Google Sheets "PDF Library" sheet with title, source, file links, and download timestamp
- **Updates Source Status** - Marks processed URLs as "Downloaded", "Failed", or "Invalid" in source sheet for tracking
- **Logs Errors** - Records failed downloads and invalid URLs in "Error Log" sheet with error messages for troubleshooting
- **Tracks Completion** - Generates completion summary with processing statistics and timestamp

**Key Innovation: Error-Resilient Processing** - Unlike simple download scripts that fail on the first error, this workflow isolates failures, continues processing remaining PDFs, and provides detailed error logging. This ensures maximum success rate and makes troubleshooting straightforward.

## How to set up

### 1. Prepare Google Sheets
- Create a Google Sheet with three tabs: "PDF URLs", "PDF Library", and "Error Log"
- In the "PDF URLs" sheet, create columns: PDF_URL (or pdf_url), Title (optional), Source (optional), Status (optional - will be updated by the workflow)
- Add sample PDF URLs in the PDF_URL column (e.g., direct links to PDF files)
- The "PDF Library" sheet will be automatically populated with columns: pdfUrl, title, source, fileName, fileId, mimeType, fileSize, driveUrl, downloadUrl, downloadedAt, status
- The "Error Log" sheet will record: status, errorMessage, pdfUrl, title (for failed downloads)
- Verify your Google Sheets credentials are set up in n8n (OAuth2 recommended)

### 2. Configure Google Sheets Nodes
- Open the "Read Pending PDF URLs" node and select your spreadsheet from the document dropdown
- Set sheet name to "PDF URLs"
- Configure the "Save to PDF Library" node: select the same spreadsheet, set sheet name to "PDF Library", operation should be "Append or Update"
- Configure the "Update Source Status" node: same spreadsheet, "PDF URLs" sheet, operation "Update"
- Configure the "Log Error" node: same spreadsheet, "Error Log" sheet, operation "Append or Update"
- Test the connection by running the "Read Pending PDF URLs" node manually to verify it can access your sheet
### 3. Set Up Google Drive Folder
- Create a folder in Google Drive where you want PDFs stored (e.g., "PDF Reports" or "Research Library")
- Open the "Upload to Google Drive" node
- Select your Google Drive account (OAuth2 credentials)
- Choose the drive (usually "My Drive")
- Select the folder you created from the folder dropdown
- The filename will be automatically extracted from the URL or generated with a timestamp
- Verify folder permissions allow the service account to upload files
- Test by manually uploading a file to ensure access works

### 4. Configure Download Settings
- The "Download PDF" node is pre-configured with appropriate headers and a 60-second timeout
- If you encounter timeout issues with large PDFs, increase the timeout in the node options
- The User-Agent header is set to mimic a browser to avoid blocking
- The Accept header is set to `application/pdf,application/octet-stream,*/*` for maximum compatibility
- For sites requiring authentication, you may need to add additional headers or use cookies
- Test with a sample PDF URL to verify the download works correctly

### 5. Set Up Scheduling & Test
- The workflow includes a Manual Trigger (for testing), a Schedule Trigger (runs every 12 hours), and an Execute Workflow Trigger (for calling from other workflows)
- To customize the schedule: open the "Schedule (Every 12 Hours)" node and adjust the interval (e.g., daily, weekly)
- For initial testing: use the Manual Trigger and add 2-3 test PDF URLs to your "PDF URLs" sheet
- Verify execution: check that PDFs are downloaded, uploaded to Drive, and metadata saved to "PDF Library"
- Monitor execution logs: check for any download failures, timeout issues, or Drive upload errors
- Review the Error Log sheet: verify failed downloads are properly logged with error messages
- Common issues: invalid URLs (check URL format), access denied (check file permissions), timeout (increase timeout for large files), Drive quota (check Google Drive storage)

## Requirements
- **Google Sheets Account** - Active Google account with OAuth2 credentials configured in n8n for reading and writing spreadsheet data
- **Google Drive Account** - Same Google account with OAuth2 credentials and sufficient storage space for PDF files
- **Source Spreadsheet** - Google Sheet with "PDF URLs", "PDF Library", and "Error Log" tabs, properly formatted with required columns
- **Valid PDF URLs** - Direct links to PDF files (not HTML pages that link to PDFs) - URLs should end in .pdf or point directly to PDF content
- **n8n Instance** - Self-hosted or cloud n8n instance with access to external websites (the HTTP Request node needs internet connectivity to download PDFs)
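For reference, a minimal sketch of the "Prepares Download Info" logic as an n8n Code node. The incoming column names follow the "PDF URLs" sheet described above; adjust them if your sheet differs.

```javascript
// Hypothetical sketch of filename extraction, URL validation, and fallback naming.
return $input.all().map((item) => {
  const url = String(item.json.PDF_URL ?? item.json.pdf_url ?? '').trim();

  // Basic validation: must be an http(s) URL.
  const isValidUrl = /^https?:\/\//i.test(url);

  // Try to pull a filename from the last path segment, decoding %20 and friends.
  let fileName = '';
  if (isValidUrl) {
    try {
      const lastSegment = new URL(url).pathname.split('/').pop() ?? '';
      fileName = decodeURIComponent(lastSegment);
    } catch (e) {
      fileName = '';
    }
  }

  // Fallback: timestamped name when the URL does not reveal one.
  if (!fileName.toLowerCase().endsWith('.pdf')) {
    fileName = `document-${Date.now()}.pdf`;
  }

  return { json: { ...item.json, pdfUrl: url, fileName, isValidUrl } };
});
```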
by Bhavy Shekhaliya
## Overview
This n8n template demonstrates how to use AI to automatically analyze WordPress blog content and generate relevant, SEO-optimized tags for WordPress posts.

## Use cases
Automate content tagging for WordPress blogs, maintain consistent taxonomy across large content libraries, save hours of manual tagging work, or improve SEO by ensuring every post has relevant, searchable tags!

## Good to know
- The workflow creates new tags automatically if they don't exist in WordPress.
- Tag generation is intelligent - it avoids duplicates by mapping to existing tag IDs.

## How it works
- We fetch a WordPress blog post using the WordPress node with sticky data enabled for testing.
- The post content is sent to GPT-4.1-mini which analyzes it and generates 5-10 relevant tags using a structured output parser.
- All existing WordPress tags are fetched via HTTP Request to check for matches.
- A smart loop processes each AI-generated tag (see the sketch at the end of this template):
  - If the tag already exists, it maps to the existing tag ID
  - If it's new, it creates the tag via the WordPress API
- All tag IDs are aggregated and the WordPress post is updated with the complete tag list.

## How to use
- The manual trigger node is used as an example, but feel free to replace this with other triggers such as a webhook, schedule, or WordPress webhook for new posts.
- Modify the "Fetch One WordPress Blog" node to fetch multiple posts or integrate with your publishing workflow.

## Requirements
- WordPress site with REST API enabled
- OpenAI API

## Customising this workflow
- Adjust the AI prompt to generate tags specific to your industry or SEO strategy
- Change the tag count (currently 5-10) based on your needs
- Add filtering logic to only tag posts in specific categories
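A minimal sketch of the tag-matching logic from the smart loop above, written as an n8n Code node. The node names "Generate Tags" and "Fetch Existing Tags" are placeholders for your AI step and the HTTP Request that lists existing WordPress tags; the WordPress REST API returns tags as objects with `id` and `name`.

```javascript
// Hypothetical sketch: decide which AI tags map to existing IDs and which must be created.
const aiTags = $('Generate Tags').first().json.tags ?? [];            // e.g. ["automation", "n8n"]
const existing = $('Fetch Existing Tags').all().map((i) => i.json);   // WP REST: [{ id, name }, ...]

const byName = Object.fromEntries(
  existing.map((t) => [String(t.name).toLowerCase(), t.id])
);

// Tags flagged needsCreation would go to POST /wp-json/wp/v2/tags before the
// post is updated with the aggregated tag IDs.
return aiTags.map((name) => {
  const id = byName[name.toLowerCase()];
  return { json: { name, tagId: id ?? null, needsCreation: id === undefined } };
});
```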
by Atik
Automate video transcription and Q&A with async VLM processing that scales from short clips to long recordings.

## What this workflow does
- Monitors Google Drive for new files in a specific folder and grabs the file ID on create
- Automatically downloads the binary to hand off for processing
- Sends the video to VLM Run for async transcription with a callback URL that posts results back to n8n
- Receives the transcript JSON via Webhook and appends a row in Google Sheets with the video identifier and transcript data
- Enables chat Q&A through the Chat Trigger + AI Agent. The agent fetches relevant rows from Sheets and answers only from those segments using the connected chat model

## Setup
Prerequisites: Google Drive and Google Sheets accounts, VLM Run API credentials, OpenAI (or another supported) chat model credentials, n8n instance.

Install the verified VLM Run node by searching for VLM Run in the nodes list, then click Install. You can also confirm on npm if needed. After install, it integrates directly for robust async transcription.

### Quick Setup
1. **Google Drive folder watch** – Add a Google Drive Trigger and choose Specific folder. Set polling to every minute, event to File Created. Connect Drive OAuth2.
2. **Download the new file** – Add a Google Drive node with Download. Map {{$json.id}} and save the binary as data.
3. **Async transcription with VLM Run** – Add the VLM Run node. Operation: video. Domain: video.transcription. Enable Process Asynchronously and set Callback URL to your Webhook path (for example /transcript-video). Add your VLM Run API key.
4. **Webhook to receive results** – Add a Webhook node with method POST and path /transcript-video. This is the endpoint VLM Run calls when the job completes. Use When Last Node Finishes or respond via a Respond node if you prefer.
5. **Append to Google Sheets** – Add a Google Sheets node with Append. Point to your spreadsheet and sheet. Connect Google Sheets OAuth2. Map (a mapping sketch follows at the end of this template):
   - Video Name → the video identifier from the webhook payload
   - Data → the transcript text or JSON from the webhook payload
6. **Chat entry point and Agent** – Add a Chat Trigger to receive user questions. Add an AI Agent and connect:
   - a Chat Model (for example, the OpenAI Chat Model)
   - the Google Sheets Tool to read relevant rows
   In the Agent system message, instruct it to use the Sheets tool to fetch transcript rows matching the question, answer only from those rows, and cite or reference row context as needed.
7. **Test and activate** – Upload a sample video to the watched Drive folder. Wait for the callback to populate your sheet. Ask a question through the Chat Trigger and confirm the agent quotes only from the retrieved rows. Activate your template and let it automate the task.

## How to take this further
- **Team memory:** Ask “What did we decide on pricing last week?” and get the exact clip and answer.
- **Study helper:** Drop classes in, then ask for key points or formulas by topic.
- **Customer FAQ builder:** Turn real support calls into answers your team can reuse.
- **Podcast highlights:** Find quotes, tips, and standout moments from each episode.
- **Meeting catch-up:** Get decisions and action items from any recording, fast.
- **Marketing snippets:** Pull short, social-ready lines from long demos or webinars.
- **Team learning hub:** Grow a searchable video brain that remembers everything.

This workflow uses the VLM Run node for scalable, async video transcription and the AI Agent for grounded Q&A from Sheets, giving you a durable pipeline from upload to searchable answers with minimal upkeep.
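If you prefer to shape the callback data before the Append step, a Code node between the Webhook and Google Sheets could look like the sketch below. The VLM Run callback payload structure is not documented here, so the `body.video` and `body.transcript` paths are purely hypothetical placeholders; inspect a real callback execution and adjust them.

```javascript
// Illustrative sketch only: map a webhook callback into the two sheet columns.
// The payload field names are hypothetical — check your actual callback body.
const body = $input.first().json.body ?? $input.first().json;

return [{
  json: {
    'Video Name': body.video ?? 'unknown',                 // maps to the "Video Name" column
    'Data': typeof body.transcript === 'string'
      ? body.transcript
      : JSON.stringify(body.transcript ?? body),           // maps to the "Data" column
  },
}];
```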
by Anna Bui
Automatically analyze n8n workflow errors with AI, create support tickets, and send detailed Slack notifications.

Perfect for development teams and businesses that need intelligent error handling with automated support workflows. Never miss critical workflow failures again!

## How it works
- Error Trigger captures any workflow failure in your n8n instance
- AI Debugger analyzes the error using structured reasoning to identify root causes
- Clean Data transforms the AI analysis into organized, actionable information (see the sketch at the end of this template)
- Create Support Ticket automatically generates a detailed ticket in FreshDesk
- Merge combines ticket data with the AI analysis for comprehensive reporting
- Generate Slack Alert creates rich, formatted notifications with all context
- Send to Team delivers instant alerts to your designated Slack channel

## How to use
- Replace the FreshDesk credentials with your helpdesk system API
- Configure the Slack channel for your team notifications
- Customize the AI analysis prompts for your specific error types
- Set up as a global error handler for all your critical workflows

## Requirements
- FreshDesk account (or compatible ticketing system)
- Slack workspace with bot permissions
- OpenAI API access for AI analysis
- n8n Cloud or self-hosted with AI nodes enabled

## Good to know
- OpenAI API calls cost approximately $0.01-0.03 per error analysis
- Works with any ticketing system that supports a REST API
- Can be triggered by webhooks from external monitoring tools
- Slack messages use rich formatting for mobile-friendly alerts

## Need Help?
Join the Discord or ask in the Forum!

Happy Monitoring!
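A minimal sketch of what the Clean Data step could look like as an n8n Code node. The AI output fields (`rootCause`, `severity`, `suggestedFix`) and the "Error Trigger" node reference are assumptions; adapt them to your AI Debugger prompt and your actual node names.

```javascript
// Hypothetical sketch of the "Clean Data" node: merge the AI analysis with
// context from the Error Trigger into one tidy object for the ticket and alert.
const aiRaw = $input.first().json.message?.content ?? '{}';

let analysis;
try {
  analysis = JSON.parse(aiRaw);
} catch (e) {
  analysis = { rootCause: aiRaw, severity: 'unknown', suggestedFix: '' };
}

const errorData = $('Error Trigger').first().json;

return [{
  json: {
    workflowName: errorData.workflow?.name,
    failedNode: errorData.execution?.lastNodeExecuted,
    errorMessage: errorData.execution?.error?.message,
    executionUrl: errorData.execution?.url,
    rootCause: analysis.rootCause,
    severity: analysis.severity,
    suggestedFix: analysis.suggestedFix,
  },
}];
```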
by Guillaume Duvernay
Move beyond generic AI-generated content and create articles that are high-quality, factually reliable, and aligned with your unique expertise. This template orchestrates a sophisticated "research-first" content creation process. Instead of simply asking an AI to write an article from scratch, it first uses an AI planner to break your topic down into logical sub-questions. It then queries a Lookio assistant—which you've connected to your own trusted knowledge base of uploaded documents—to build a comprehensive research brief. Only then is this fact-checked brief handed to a powerful AI writer to compose the final article, complete with source links. This is the ultimate workflow for scaling expert-level content creation.

## Who is this for?
- **Content marketers & SEO specialists:** Scale the creation of authoritative, expert-level blog posts that are grounded in factual, source-based information.
- **Technical writers & subject matter experts:** Transform your complex internal documentation into accessible public-facing articles, tutorials, and guides.
- **Marketing agencies:** Quickly generate high-quality, well-researched drafts for clients by connecting the workflow to their provided brand and product materials.

## What problem does this solve?
- **Reduces AI "hallucinations":** By grounding the entire writing process in your own trusted knowledge base, the AI generates content based on facts you provide, not on potentially incorrect information from its general training data.
- **Ensures comprehensive topic coverage:** The initial AI-powered "topic breakdown" step acts like an expert outliner, ensuring the final article is well-structured and covers all key sub-topics.
- **Automates source citation:** The workflow is designed to preserve and integrate source URLs from your knowledge base directly into the final article as hyperlinks, boosting credibility and saving you manual effort.
- **Scales expert content creation:** It effectively mimics the workflow of a human expert (outline, research, consolidate, write) but in an automated, scalable, and incredibly fast way.

## How it works
This workflow follows a sophisticated, multi-step process to ensure the highest quality output:
1. **Decomposition:** You provide an article title and guidelines via the built-in form. An initial AI call then acts as a "planner," breaking down the main topic into an array of 5-8 logical sub-questions.
2. **Fact-based research (RAG):** The workflow loops through each of these sub-questions and queries your Lookio assistant. This assistant, which you have pre-configured by uploading your own documents, finds the relevant information and source links for each point.
3. **Consolidation:** All the retrieved question-and-answer pairs are compiled into a single, comprehensive research brief (see the sketch at the end of this template).
4. **Final article generation:** This complete, fact-checked brief is handed to a final, powerful AI writer (e.g., GPT-4o). Its instructions are clear: write a high-quality article using only the provided information and integrate the source links as hyperlinks where appropriate.

## Building your own RAG pipeline vs. using Lookio or alternative tools
Building a RAG system natively within n8n offers deep customization, but it requires managing a toolchain for data processing, text chunking, and retrieval optimization. An alternative is to use a managed service like Lookio, which provides RAG functionality through an API.
This approach abstracts the backend infrastructure for document ingestion and querying, trading the granular control of a native build for a reduction in development and maintenance tasks.

## Implementing the template

**1. Set up your Lookio assistant (Prerequisite):**
- Lookio is a platform for building intelligent assistants that leverage your organization's documents as a dedicated knowledge base. First, sign up at Lookio. You'll get 50 free credits to get started.
- Upload the documents you want to use as your knowledge base.
- Create a new assistant and then generate an API key.
- Copy your Assistant ID and your API Key for the next step.

**2. Configure the workflow:**
- Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes.
- In the Query Lookio Assistant (HTTP Request) node, paste your Assistant ID in the body and add your Lookio API Key for authentication (we recommend using a Bearer Token credential).

**3. Activate the workflow:**
- Toggle the workflow to "Active" and use the built-in form to generate your first fact-checked article!

## Taking it further
- **Automate publishing:** Connect the final **Article result** node to a **Webflow** or **WordPress** node to automatically create a draft post in your CMS.
- **Generate content in bulk:** Replace the **Form Trigger** with an **Airtable** or **Google Sheet** trigger to automatically generate a whole batch of articles from your content calendar.
- **Customize the writing style:** Tweak the system prompt in the final **New content - Generate the AI output** node to match your brand's specific tone of voice, add SEO keywords, or include specific calls-to-action.
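A minimal sketch of the consolidation step as an n8n Code node. It assumes each incoming item already carries a question, answer, and sources array mapped from the Lookio response; those field names are assumptions that depend on how you map the HTTP Request output.

```javascript
// Hypothetical sketch: compile the retrieved Q&A pairs into one research brief
// for the final AI writer.
const pairs = $input.all().map((i) => i.json);

const brief = pairs
  .map((p, idx) => {
    const sources = (p.sources ?? []).map((url) => `- ${url}`).join('\n');
    return `## ${idx + 1}. ${p.question}\n\n${p.answer}\n\n${sources ? `Sources:\n${sources}` : ''}`;
  })
  .join('\n\n');

// A single item carrying the full, fact-checked research brief.
return [{ json: { researchBrief: brief } }];
```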