by Trung Tran
# Multi-Agent Architecture Free Bootstrap Template for Beginners

Free template to learn and reuse a multi-agent architecture in n8n. The company metaphor: a CEO (orchestrator) delegates to Marketing, Operations, and Finance to produce a short sales-season plan, export it to PDF, and share it.

## Who's it for
- Builders who want a clear, minimal pattern for multi-agent orchestration in n8n.
- Teams demoing/teaching agent collaboration with one coordinator + three specialists.
- Anyone needing a repeatable template to generate plans from multiple "departments".

## How it works / What it does
1. **Trigger (Manual)** — Click *Execute workflow* to start.
2. **Edit Fields** — Provide brief inputs (company, products, dates, constraints, channels, goals).
3. **CEO Agent (Orchestrator)** — Reads the brief, calls the 3 tool agents once, merges results, resolves conflicts.
4. **Marketing Agent** — Proposes top campaigns + channels + content calendar.
5. **Operations Agent** — Outlines inventory/staffing readiness, fulfillment steps, risks.
6. **Finance Agent** — Suggests pricing/discounts, budget split, targets.
7. **Compose Document** — CEO produces Markdown; a node converts it to a Google Doc → PDF.
8. **Share** — Upload the PDF to Slack (or Drive) for review.

## Outputs
- **Markdown plan** with sections (Summary, Timeline, Marketing, Ops, Pricing, Risks, Next Actions).
- **Compact JSON** for automation (campaigns, budget, dates, actions); a sample schema appears at the end of this section.
- **PDF** file for stakeholders.

## How to set up
1. **Add credentials**
   - OpenAI (or your LLM provider) for all agents.
   - Google (Drive/Docs) to create the document and export the PDF.
   - Slack (optional) to upload/share the PDF.
2. **Map nodes (suggested)**
   - *When clicking 'Execute workflow'* → *Edit Fields* (form with: company, products, audience, start_date, end_date, channels, constraints, metrics).
   - *CEO Agent* (AI Tool Node) → calls *Marketing Agent*, *Operations Agent*, *Finance Agent* (AI Tool Nodes).
   - *Configure metadata* (doc title from company + window).
   - *Create document file* (Google Docs API) with the CEO's Markdown.
   - *Convert to PDF* (export).
   - *Upload a file* (Slack) to share.
3. **Prompts (drop-in)**
   - CEO (system): orchestrate 3 tools; request concise JSON + Markdown; merge & resolve; output sections + JSON.
   - Marketing / Operations / Finance (system): each returns a small JSON per its scope (campaigns/calendar; staffing/steps/risks; discounts/budget/targets).
4. **Test** — Run once; verify the PDF and Slack message.

## Requirements
- n8n (current version with the AI Tool Node).
- LLM credentials (e.g., OpenAI).
- Google credentials for Docs/Drive (to create & export).
- Optional Slack bot token for file uploads.

## How to customize the workflow
- **Swap roles**: Replace departments (e.g., Product, Legal, Support) or add more tool agents.
- **Change outputs**: Export to DOCX/HTML/Notion; add a cover page; attach brand styles.
- **Approval step**: Insert Slack "Send & Wait" before PDF generation for review/edits.
- **Data grounding**: Add RAG (Sheets/DB/Docs) so agents cite inventory, pricing, or past campaign KPIs.
- **Automation JSON**: Extend the schema to match your CRM/PM tool and push next_actions into Jira/Asana.
- **Scheduling**: Replace the manual trigger with a cron schedule (weekly/monthly planning).
- **Localization**: Add a Translation agent or set language via an input field.
- **Guardrails**: Add length limits, cost caps (max tokens), and validation on agent JSON.
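The listing describes the compact JSON output only loosely (campaigns, budget, dates, actions), so the exact schema is yours to define. A minimal sketch of what the CEO Agent might be asked to return; every field name here is an illustrative assumption, not the template's fixed contract:

```json
{
  "plan": {
    "window": { "start_date": "2024-11-15", "end_date": "2024-12-31" },
    "campaigns": [
      { "name": "Holiday Bundle Push", "channel": "email", "budget_share": 0.4 }
    ],
    "budget": { "total": 10000, "currency": "USD" },
    "next_actions": [
      { "owner": "marketing", "action": "Draft bundle landing page", "due": "2024-11-20" }
    ]
  }
}
```

Validating agent replies against a fixed shape like this is also the natural hook for the Guardrails customization mentioned above.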
by Muhammad Ali
# Description

## How it works
This powerful workflow helps businesses and freelancers automatically manage invoices received on WhatsApp. It detects new messages, downloads attached invoices, extracts key data using OCR (Optical Character Recognition), summarizes the details with AI, updates Google Sheets for record-keeping, saves files to Google Drive, and instantly replies with a clean summary message, all without manual effort.

Perfect for small businesses, agencies, accountants, and freelancers who regularly receive invoices via WhatsApp. Say goodbye to manual data entry and hello to effortless automation.

## Set up steps
Setup takes around 10–15 minutes:
1. Connect your WhatsApp Cloud API to trigger on incoming messages.
2. Add your OCR.Space API key to extract invoice text (a request sketch follows below).
3. Link your Google Sheets and Google Drive accounts for data logging and storage.
4. Enter your OpenAI API key for AI-based summarization.
5. Import the template, test once, and you're ready to automate your invoice workflow.

## Why use this workflow
- Save hours of manual data entry
- Keep all invoices safely stored and organized in Drive
- Get instant summaries directly in WhatsApp
- Improve efficiency for client billing and expense tracking
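For orientation, OCR.Space's parse endpoint accepts a POST with your API key and either a file upload or a document URL. A minimal sketch of the call as it might be configured in an n8n HTTP Request node; parameter names follow OCR.Space's public documentation, but confirm them against your plan before relying on this:

```json
{
  "method": "POST",
  "url": "https://api.ocr.space/parse/image",
  "formData": {
    "apikey": "YOUR_OCR_SPACE_KEY",
    "url": "https://example.com/invoice.pdf",
    "language": "eng",
    "isTable": "true"
  }
}
```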
by Jameson Kanakulya
# Automated Content Page Generator with AI, Tavily Research, and Supabase Storage

> ⚠️ Self-Hosted Disclaimer: This template requires a self-hosted n8n installation and external service credentials (OpenAI, Tavily, Google Drive, NextCloud, Supabase). It cannot run on n8n Cloud due to dependency requirements.

## Overview
Transform simple topic inputs into professional, multi-platform content automatically. This workflow combines AI-powered content generation with intelligent research and seamless storage integration to create website content, blog articles, and landing pages optimized for different audiences.

## Key Features
- **Automated Research**: Uses Tavily's advanced search to gather relevant, up-to-date information
- **Multi-Platform Content**: Generates optimized content for websites, blogs, and landing pages
- **Image Management**: Downloads from Google Drive and uploads to NextCloud with public URL generation
- **Database Integration**: Stores all content in Supabase for easy retrieval
- **Error Handling**: Built-in error management workflow for reliability
- **Content Optimization**: AI-driven content strategy with trend analysis and SEO optimization

## Required Services & APIs
- **n8n**: Self-hosted instance (required)
- **OpenAI**: GPT-4 API access for content generation
- **Tavily**: Research API for content discovery
- **Google Drive**: Image storage and retrieval
- **Google Sheets**: Content input and workflow triggering
- **NextCloud**: Image hosting and public URL generation
- **Supabase**: Database storage for generated content

## Setup Instructions

### Prerequisites
Before setting up this workflow, ensure you have:
- A self-hosted n8n installation
- API credentials for all required services
- The database table created in Supabase

### Step 1: Service Account Configuration

**OpenAI Setup**
1. Create an OpenAI account at platform.openai.com
2. Generate an API key from the API Keys section
3. In n8n, create new OpenAI credentials using your API key
4. Test the connection to ensure GPT-4 access

**Tavily Research Setup**
1. Sign up at tavily.com
2. Get your API key from the dashboard
3. Add Tavily credentials in n8n
4. Configure search depth to "advanced" for best results

**Google Services Setup**
1. Create a Google Cloud Project
2. Enable the Google Drive API and Google Sheets API
3. Create OAuth2 credentials
4. Configure Google Drive and Google Sheets credentials in n8n
5. Share your input spreadsheet with the service account

**NextCloud Setup**
1. Install NextCloud or use a hosted solution
2. Create an application password for API access
3. Configure NextCloud credentials in n8n
4. Create an /images/ folder for content storage

**Supabase Setup**
1. Create a Supabase project at supabase.com
2. Create a table with the following structure:

```sql
CREATE TABLE works (
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  content TEXT NOT NULL,
  image_url TEXT,
  category TEXT,
  created_at TIMESTAMP DEFAULT NOW()
);
```

3. Get the project URL and service key from settings
4. Configure Supabase credentials in n8n

### Step 2: Google Sheets Input Setup
Create a Google Sheets document with the following columns:
- **TITLE**: Topic or title for content generation
- **IMAGE_URL**: Google Drive sharing URL for the associated image

Example format:

| TITLE | IMAGE_URL |
|-------|-----------|
| AI Chatbot Implementation | https://drive.google.com/file/d/your-file-id/view |
| Digital Marketing Trends 2024 | https://drive.google.com/file/d/another-file-id/view |

### Step 3: Workflow Import and Configuration
1. Import the workflow JSON into your n8n instance
2. Configure all credential connections:
   - Link OpenAI credentials to the "OpenAI_GPT4_Model" node
   - Link Tavily credentials to the "Tavily_Research_Agent" node
   - Link Google credentials to the "Google_Sheets_Trigger" and "Google_Drive_Image_Downloader" nodes
   - Link NextCloud credentials to the "NextCloud_Image_Uploader" and "NextCloud_Public_URL_Generator" nodes
   - Link Supabase credentials to the "Supabase_Content_Storage" node
3. Update the Google Sheets Trigger node:
   - Set your spreadsheet ID in the documentId field
   - Configure the polling frequency (default: every minute)
4. Test each node connection individually before activating

### Step 4: Error Handler Setup (Optional)
The workflow references an error handler workflow (GWQ4UI1i3Z0jp3GF). Either:
- Create a simple error notification workflow with this ID
- Remove the error handling references if not needed
- Update the workflow ID to match your own error handler

### Step 5: Workflow Activation
1. Save all node configurations
2. Test the workflow with a sample row in your Google Sheet
3. Verify content generation and storage in Supabase
4. Activate the workflow for continuous monitoring

## How It Works

### Workflow Process
1. **Trigger**: Google Sheets monitors for new rows with content topics
2. **Research**: Tavily searches for 3 relevant articles about the topic
3. **Content Generation**: The AI agent creates multi-platform content (website, blog, landing page)
4. **Content Cleaning**: Text processing removes formatting artifacts
5. **Image Processing**: Downloads the image from Google Drive, uploads it to NextCloud
6. **URL Generation**: Creates public sharing links for images
7. **Storage**: Saves the final content package to the Supabase database (a sample record follows below)

### Content Output Structure
Each execution generates:
- **Optimized Title**: SEO-friendly, platform-appropriate headline
- **Multi-Platform Content**:
  - Website content (professional, authority-building)
  - Blog content (educational, SEO-optimized)
  - Landing page content (conversion-focused)
- **Category Classification**: Automated content categorization
- **Image Assets**: Processed and publicly accessible images

## Customization Options

### Content Strategy Modification
- Edit the AI agent's system message to change content style
- Adjust character limits for different platform requirements
- Modify category classifications for your industry

### Research Parameters
- Change Tavily search depth (basic, advanced)
- Adjust the number of research sources (1-10)
- Modify the search topic focus

### Storage Configuration
- Update the Supabase table structure for additional fields
- Change the NextCloud folder organization
- Modify image naming conventions

## Troubleshooting

### Common Issues
**Workflow not triggering:**
- Check Google Sheets permissions
- Verify polling frequency settings
- Ensure the spreadsheet format matches requirements

**Content generation errors:**
- Verify the OpenAI API key and credits
- Check GPT-4 model access
- Review system message formatting

**Image processing failures:**
- Confirm Google Drive sharing permissions
- Check NextCloud storage space and permissions
- Verify file formats are supported

**Database storage issues:**
- Validate the Supabase table structure
- Check API key permissions
- Review the field mapping in the storage node

### Performance Optimization
- Adjust the polling frequency based on your content volume
- Monitor API usage to stay within limits
- Consider batch processing for high-volume scenarios

## Support and Updates
This template is designed for self-hosted n8n environments and requires technical setup. For issues:
- Check the n8n community forums
- Review service-specific documentation
- Test individual nodes in isolation
- Monitor execution logs for detailed error information
by Roshan Ramani
# 🛒 Smart Telegram Shopping Assistant with AI Product Recommendations

## Workflow Overview
**Target User Role**: E-commerce Business Owners, Affiliate Marketers, Customer Support Teams

**Problem Solved**: Businesses need an automated way to help customers find products on Telegram without manual intervention, while providing intelligent recommendations that increase conversion rates.

**Opportunity Created**: Transform any Telegram channel into a smart shopping assistant that can handle both product queries and customer conversations automatically.

## What This Workflow Does
This workflow creates an intelligent Telegram bot that:
- 🤖 **Automatically detects** whether users are asking about products or just chatting
- 🛒 **Scrapes Amazon** in real time to find the best matching products
- 🎯 **Uses AI to analyze and rank** products based on price, ratings, and user needs
- 📱 **Delivers perfectly formatted** recommendations optimized for Telegram
- 💬 **Handles casual conversations** professionally when users aren't shopping

## Real-World Use Cases
- **E-commerce Support**: Reduce customer service workload by 70%
- **Affiliate Marketing**: Automatically recommend products with tracking links
- **Telegram Communities**: Add shopping capabilities to existing channels
- **Product Discovery**: Help customers find products they didn't know existed

## Key Features & Benefits

### 🧠 Intelligent Intent Detection
- Uses Google Gemini AI to understand user messages
- Automatically routes to product-search or conversation mode (a classifier sketch follows at the end of this section)
- Handles multiple languages and casual typing styles

### 🛒 Real-Time Product Data
- Integrates with Apify's Amazon scraper for live data
- Fetches prices, ratings, reviews, and product details
- Processes up to 10 products per search instantly

### 🎯 AI-Powered Recommendations
- Analyzes multiple products simultaneously
- Ranks by relevance, value, and user satisfaction
- Provides the top 5 personalized recommendations with reasoning

### 📱 Telegram-Optimized Output
- Clean formatting with emojis and markdown
- Respects character limits for mobile viewing
- Includes direct purchase links for easy buying

## Setup Requirements

### Required Credentials
1. **Telegram Bot Token** - Free from @BotFather
2. **Google Gemini API Key** - Free tier available at AI Studio
3. **Apify API Token** - Free tier includes 100 requests/month

### Required n8n Nodes
- @n8n/n8n-nodes-langchain (for AI functionality)
- Built-in Telegram, HTTP Request, and Code nodes

## Quick Setup Guide

### Step 1: Telegram Bot Creation
1. Message @BotFather on Telegram
2. Create a new bot with the /newbot command
3. Copy the bot token to your credentials

### Step 2: AI Configuration
1. Sign up for Google AI Studio
2. Generate an API key for Gemini
3. Add the credentials to all three AI model nodes

### Step 3: Product Scraping Setup
1. Register for a free Apify account
2. Get the API token from the dashboard
3. Add the token to the "Amazon Product Scraper" node

### Step 4: Activation
1. Import the workflow JSON
2. Add your credentials
3. Activate the Telegram Trigger
4. Test with a product query!

## Workflow Architecture
1. 📱 **Message Entry Point** - Telegram Trigger receives all messages
2. 🧹 **Query Preprocessing** - Cleans and normalizes user input for better search results
3. 🤖 **AI Intent Classification** - Determines if the message is product-related or conversational
4. 🔀 **Smart Routing** - Directs to the appropriate workflow path based on intent
5. 💬 **Conversation Path** - Handles greetings, questions, and general support
6. 🛒 **Product Search Path** - Scrapes Amazon → processes data → AI analysis → recommendations
7. 📤 **Optimized Delivery** - Formats and sends responses back to Telegram

## Customization Opportunities

### Easy Modifications
- **Multiple Marketplaces**: Add eBay, Flipkart, or local stores
- **Product Categories**: Specialize for electronics, fashion, etc.
- **Language Support**: Translate for different markets
- **Branding**: Customize responses with your brand voice

### Advanced Extensions
- **Price Monitoring**: Set up alerts for price drops
- **User Preferences**: Remember customer preferences
- **Analytics Dashboard**: Track popular products and queries
- **Affiliate Integration**: Add commission tracking links

## Success Metrics & ROI

### Performance Benchmarks
- **Response Time**: 3-5 seconds for product queries
- **Accuracy**: 90%+ relevant product matches
- **User Satisfaction**: 85%+ positive feedback in testing

### Business Impact
- **Reduced Support Costs**: Automate 70% of product inquiries
- **Increased Conversions**: Personalized recommendations boost sales
- **24/7 Availability**: Never miss a customer inquiry
- **Scalability**: Handle unlimited concurrent users

## Workflow Complexity
**Intermediate Level** - Requires API setup but includes detailed instructions. Perfect for users with basic n8n experience who want to create something powerful.
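The listing doesn't publish the classifier's exact contract, but a Gemini intent-classification step in this kind of flow typically returns a small structured object for the routing switch to act on. A sketch, with every field name an illustrative assumption:

```json
{
  "intent": "product_search",
  "confidence": 0.93,
  "normalized_query": "wireless earbuds under $50",
  "route": "amazon_scraper"
}
```

The Smart Routing step would then branch on `intent`: product-related messages go to the Apify scraper path, everything else to the conversation path.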
by Intuz
This n8n template from Intuz provides a complete solution to automate a powerful, AI-driven "Chat with your PDF" bot on Telegram. It uses Retrieval-Augmented Generation (RAG) to allow users to upload documents, which are then indexed into a vector database, enabling the bot to answer questions based only on the provided content.

## Who's this workflow for?
- Researchers & Students
- Legal & Compliance Teams
- Business Analysts & Financial Advisors
- Anyone needing to quickly find information within large documents

## How it works
This workflow has two primary functions: indexing a new document and answering questions about it.

**1. Uploading & Indexing a Document:**
1. A user sends a PDF file to the Telegram bot.
2. n8n downloads the document, extracts the text, and splits it into small, manageable chunks.
3. Using Google Gemini, each text chunk is converted into a numerical representation (an "embedding").
4. These embeddings are stored in a Pinecone vector database, making the document's content searchable.
5. The bot sends a confirmation message to the user that the document has been successfully saved.

**2. Asking a Question (RAG):**
1. A user sends a regular text message (a question) to the bot.
2. n8n converts the user's question into an embedding using Google Gemini.
3. It then searches the Pinecone database to find the most relevant text chunks from the uploaded PDF that match the question.
4. These relevant chunks (the "context") are sent to the Gemini chat model along with the original question.
5. Gemini generates a new, accurate answer based only on the provided context and sends it back to the user in Telegram.

## Key Requirements to Use This Template
1. **n8n Instance & Required Nodes**: An active n8n account (Cloud or self-hosted). This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain). If you are using a self-hosted version of n8n, please ensure this package is installed.
2. **Telegram Account**: A Telegram bot created via the BotFather, along with its API token.
3. **Google Gemini AI Account**: A Google Cloud account with the Vertex AI API enabled and an associated API key.
4. **Pinecone Account**: A Pinecone account with an API key. You must have a vector index created in Pinecone. For use with Google Gemini's embedding-001 model, the index must be configured with 768 dimensions (an index sketch follows below).

## Setup Instructions
1. **Telegram Configuration**: In the "Telegram Message Trigger" node, create a new credential and add your Telegram bot's API token. Do the same for the "Telegram Response" and "Telegram Response about Database" nodes.
2. **Pinecone Configuration**: In both "Pinecone Vector Store" nodes, create a new credential and add your Pinecone API key. In the "Index" field of both nodes, enter the name of your pre-configured Pinecone index (e.g., telegram).
3. **Google Gemini Configuration**: In all three Google Gemini nodes (Embeddings Google Gemini, Embeddings Google Gemini1, and Google Gemini Chat Model), create a new credential and add your Google Gemini (PaLM) API key.
4. **Activate and Use**: Save the workflow and toggle the "Active" switch to ON. To use: first, send a PDF document to your bot and wait for the confirmation message. Then you can start asking questions about the content of that PDF.

## Connect with us
- Website: https://www.intuz.com/services
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get Started: https://n8n.partnerlinks.io/intuz
- For Custom Workflow Automation, click here: Get Started
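A matching Pinecone index, expressed as the values you would pick when creating it in the Pinecone console: the dimension must be 768 to match embedding-001's vectors, while the name and metric shown here are common choices rather than settings the template mandates.

```json
{
  "name": "telegram",
  "dimension": 768,
  "metric": "cosine"
}
```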
by Trung Tran
# 📘 Code of Conduct Q&A Slack Chatbot with RAG Powered

> Empower employees to instantly access and understand the company's Code of Conduct via a Slack chatbot, powered by Retrieval-Augmented Generation (RAG) and LLMs.

## 🧑💼 Who's it for
This workflow is designed for:
- **HR and compliance teams** to automate policy-related inquiries
- **Employees** who want quick answers to Code of Conduct questions directly inside Slack
- **Startups or enterprises** that need internal compliance self-service tools powered by AI

## ⚙️ How it works / What it does
This RAG-powered Slack chatbot answers user questions based on your uploaded Code of Conduct PDF using GPT-4 and embedded document chunks. Here's the flow:
1. **Receive Message from Slack**: A webhook triggers when a message is posted in Slack.
2. **Check if it's a valid query**: Filters out non-user messages (e.g., bot mentions).
3. **Run Agent with RAG**: Uses GPT-4 with the Query Data Tool to retrieve relevant document chunks and returns a well-formatted, context-aware answer.
4. **Send Response to Slack**: Fetches user info and posts the answer back in the same channel.

**Document Upload Flow:**
- HR can upload the PDF Code of Conduct file.
- It's parsed, chunked, embedded using OpenAI, and stored for future query retrieval.
- A backup copy is saved to Google Drive.

## 🛠️ How to set up
1. **Prepare your environment:**
   - Slack Bot token & webhook configured (sample Slack app manifest: https://wisestackai.s3.ap-southeast-1.amazonaws.com/slack_bot_manifest.json; a minimal manifest sketch appears at the end of this section)
   - OpenAI API key (for GPT-4 & embedding)
   - Google Drive credentials (optional, for backup)
2. **Upload the Code of Conduct PDF:**
   - Use the designated node to upload your document (sample file: https://wisestackai.s3.ap-southeast-1.amazonaws.com/20220419-ingrs-code-of-conduct-policy-en.pdf)
   - This triggers chunking → embedding → data store.
3. **Deploy the chatbot:**
   - Host the webhook and connect it to your Slack app.
   - Share the command format with employees (e.g., @CodeBot Can I accept gifts from partners?)
4. **Monitor and iterate:**
   - Improve chunk size or the embedding model if queries aren't accurate.
   - Review unanswered queries to enhance coverage.

## 📋 Requirements
- n8n (self-hosted or Cloud)
- Slack App (with chat:write, users:read, commands)
- OpenAI account (embedding + GPT-4 access)
- Google Drive integration (for backups)
- Uploaded Code of Conduct in PDF format

## 🧩 How to customize the workflow

| What to Customize | How to Do It |
|-------------------|--------------|
| 🔤 Prompt style | Edit the System & User prompts inside the Code Of Conduct Agent node |
| 📄 Document types | Upload additional policy PDFs and tag them differently in metadata |
| 🤖 Agent behavior | Tune GPT temperature or replace with a different LLM |
| 💬 Slack interaction | Customize message formats or trigger phrases |
| 📁 Data Store engine | Swap to Pinecone, Weaviate, Supabase, etc. depending on use case |
| 🌐 Multilingual support | Preprocess text and support locale detection via Slack metadata |
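The linked sample manifest defines the bot's scopes and event subscription. A minimal JSON fragment covering just the pieces this workflow relies on; the webhook URL is a placeholder for your n8n endpoint, and the specific bot events listed are assumptions, so consult the linked sample manifest for the authoritative file:

```json
{
  "oauth_config": {
    "scopes": {
      "bot": ["chat:write", "users:read", "commands"]
    }
  },
  "settings": {
    "event_subscriptions": {
      "request_url": "https://your-n8n-host/webhook/code-of-conduct",
      "bot_events": ["app_mention", "message.channels"]
    }
  }
}
```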
by Jitesh Dugar
Transform college admissions from an overwhelming manual process into an intelligent, efficient, and equitable system that analyzes essays, scores applicants holistically, and identifies top candidates—saving 40+ hours per week while improving decision quality.

## 🎯 What This Workflow Does
Automates comprehensive application review with AI-powered analysis:

**📝 Application Intake** - Captures complete college applications via Jotform

**📚 AI Essay Analysis** - Deep analysis of personal statements and supplemental essays for:
- Writing quality, authenticity, and voice
- AI-generated content detection
- Specificity and research quality
- Red flags (plagiarism, inconsistencies, generic writing)

**🎯 Holistic Review AI** - Evaluates applicants across five dimensions:
- Academic strength (GPA, test scores, rigor)
- Extracurricular profile (leadership, depth, impact)
- Personal qualities (character, resilience, maturity)
- Institutional fit (values alignment, contribution potential)
- Diversity contribution (unique perspectives, experiences)

**🚦 Smart Routing** - Automatically categorizes and routes applications:
- Strong Admit (85-100): Slack alert → Director email → Interview invitation → Fast-track
- Committee Review (65-84): Detailed analysis → Committee discussion → Human decision
- Standard Review (<65): Acknowledgment → Human verification → Standard timeline

**📊 Comprehensive Analytics** - All applications logged with scores, recommendations, and outcomes

## ✨ Key Features

### AI Essay Analysis Engine
- **Writing Quality Assessment**: Grammar, vocabulary, structure, narrative coherence
- **Authenticity Detection**: Distinguishes genuine voice from AI-generated content (GPT detectors)
- **Content Depth Evaluation**: Self-awareness, insight, maturity, storytelling ability
- **Specificity Scoring**: Generic vs. tailored "Why Us" essays with research depth
- **Red Flag Identification**: Plagiarism indicators, privilege blindness, inconsistencies, template writing
- **Thematic Analysis**: Core values, motivations, growth narratives, unique perspectives

### Holistic Review Scoring (0-100 Scale)
- **Academic Strength (35%)**: GPA in context, test scores, course rigor, intellectual curiosity
- **Extracurricular Profile (25%)**: Quality over quantity, leadership impact, commitment depth
- **Personal Qualities (20%)**: Character, resilience, empathy, authenticity, self-awareness
- **Institutional Fit (15%)**: Values alignment, demonstrated interest, contribution potential
- **Diversity Contribution (5%)**: Unique perspectives, life experiences, background diversity

(A worked example of the scored output appears at the end of this section.)

### Intelligent Candidate Classification
- **Admit**: Top 15% - clear admit, exceptional across multiple dimensions
- **Strong Maybe**: Top 15-30% - competitive, needs committee discussion
- **Maybe**: Top 30-50% - solid but not standout, waitlist consideration
- **Deny**: Below threshold - does not meet competitive standards (always human-verified)

### Automated Workflows
- **Priority Candidates**: Immediate Slack alerts, director briefs, interview invitations
- **Committee Cases**: Detailed analysis packets, discussion points, voting workflows
- **Standard Processing**: Professional acknowledgments, timeline communications
- **Interview Scheduling**: Automated invitations with candidate-specific questions

## 💼 Perfect For
- **Selective Colleges & Universities**: 15-30% acceptance rates, holistic review processes
- **Liberal Arts Colleges**: Emphasis on essays, personal qualities, institutional fit
- **Large Public Universities**: Processing thousands of applications efficiently
- **Graduate Programs**: MBA, law, medical school admissions
- **Scholarship Committees**: Evaluating merit and need-based awards
- **Honors Programs**: Identifying top candidates for selective programs
- **Private High Schools**: Admissions teams with holistic processes

## 🎓 Admissions Impact

### Efficiency & Productivity
- **40-50 hours saved per week** on initial application review
- **70% faster** essay evaluation with AI pre-analysis
- **3x more applications** processed per reader
- **Zero data entry** - all information auto-extracted
- **Consistent evaluation** across thousands of applications
- **Same-day turnaround** for top candidate identification

### Decision Quality Improvements
- **Objective scoring** reduces unconscious bias
- **Consistent criteria** applied to all applicants
- **Essay authenticity checks** catch AI-written applications
- **Holistic view** weighs every dimension of the application
- **Data-driven insights** inform committee discussions
- **Fast-track top talent** before competitors

### Equity & Fairness
- **Standardized evaluation** ensures fair treatment
- **First-generation flagging** provides context
- **Socioeconomic consideration** in holistic scoring
- **Diverse perspectives valued** in the diversity score
- **Bias detection** in essay analysis
- **Audit trail** for compliance and review

### Candidate Experience
- **Instant acknowledgment** of application receipt
- **Professional communication** at every stage
- **Clear timelines** and expectations
- **Interview invitations** for competitive candidates
- **Respectful process** for all applicants regardless of outcome

## 🔧 What You'll Need

### Required Integrations
- **Jotform** - Application intake forms (create your form for free on Jotform using this link)
- **OpenAI API** - GPT-4o for analysis (~$0.15-0.25 per application)
- **Gmail/Outlook** - Applicant and staff communication (free)
- **Google Sheets** - Application database and analytics (free)

### Optional Integrations
- **Slack** - Real-time alerts for strong candidates ($0-8/user/month)
- **Google Calendar** - Interview scheduling automation (free)
- **Airtable** - Advanced application tracking (alternative to Sheets)
- **Applicant Portal Integration** - Status updates via API
- **CRM Systems** - Slate, TargetX, Salesforce for higher ed

## 🚀 Setup Guide (3-4 Hours)

### Step 1: Create Application Form (60 min)
Build a comprehensive Jotform with sections:

**Basic Information**
- Full name, email, phone
- High school, graduation year
- Intended major

**Academic Credentials**
- GPA (weighted/unweighted, scale)
- SAT score (optional)
- ACT score (optional)
- Class rank (if available)
- Academic honors

**Essays (Most Important!)**
- Personal statement (650 words max)
- "Why Our College" essay (250-300 words)
- Supplemental prompts (program-specific)

**Activities & Achievements**
- Extracurricular activities (list with hours/week, years)
- Leadership positions (with descriptions)
- Honors and awards
- Community service hours
- Work experience

**Additional Information**
- First-generation college student (yes/no)
- Financial aid needed (yes/no)
- Optional: demographic information
- Optional: additional context

### Step 2: Import n8n Workflow (15 min)
1. Copy the JSON from the artifact
2. In n8n: Workflows → Import → Paste
3. Includes all nodes + 7 detailed sticky notes

### Step 3: Configure OpenAI API (20 min)
1. Get an API key: https://platform.openai.com/api-keys
2. Add it to both AI nodes (Essay Analysis + Holistic Review)
3. Model: gpt-4o (best for nuanced analysis)
4. Temperature: 0.3 (consistency with creativity)
5. Test with a sample application
6. Cost: $0.15-0.25 per application (essay analysis + holistic review)

### Step 4: Customize Institutional Context (45 min)
Edit the AI prompts to reflect YOUR college.

In the Holistic Review prompt, update:
- College name and type
- Acceptance rate
- Average admitted student profile (GPA, test scores)
- Institutional values and culture
- Academic programs and strengths
- What makes your college unique
- Desired student qualities

In the Essay Analysis prompt, add:
- Specific programs to look for mentions of
- Faculty names applicants should reference
- Campus culture keywords
- Red flags specific to your institution

### Step 5: Setup Email Communications (30 min)
1. Connect Gmail/Outlook OAuth
2. Update all recipient addresses:
   - admissions-director@college.edu
   - admissions-committee@college.edu
   - Email addresses for strong candidate alerts
3. Customize email templates:
   - Add college name, logo, branding
   - Update contact information
   - Adjust tone to match institutional voice
   - Include decision release dates
   - Add applicant portal links

### Step 6: Configure Slack Alerts (15 min, Optional)
1. Create channel: #admissions-strong-candidates
2. Add the webhook URL or bot token
3. Test with a mock strong candidate
4. Customize the alert format and recipients

### Step 7: Create Admissions Database (30 min)
Google Sheet with columns:
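Whatever columns you choose, the holistic-review output that feeds them follows from the weights listed earlier: the weighted total is the sum of each dimension score times its weight. A worked sketch with illustrative field names and numbers (88×0.35 + 80×0.25 + 85×0.20 + 80×0.15 + 60×0.05 = 82.8, which lands in the 65-84 Committee Review band):

```json
{
  "scores": {
    "academic_strength": 88,
    "extracurricular_profile": 80,
    "personal_qualities": 85,
    "institutional_fit": 80,
    "diversity_contribution": 60
  },
  "weighted_total": 82.8,
  "classification": "Strong Maybe",
  "route": "Committee Review"
}
```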
by Swot.AI
# Description
This workflow lets you upload a PDF document and automatically analyze it with AI. It extracts the text, summarizes the content, flags key clauses or risks, and then delivers the results via Gmail while also storing them in Google Sheets for tracking. It's designed for legal, compliance, or contract review use cases, but can be adapted for any document analysis scenario.

Test it here: PDF Document Assistant

## 🔹 Instructions / Setup
1. **Webhook Input** - Upload a PDF document by sending it to the webhook URL.
2. **Extract from File** - The workflow extracts text from the uploaded PDF.
3. **Pre-processing (Code Node)** - Cleans and formats the extracted text to remove unwanted line breaks or artifacts.
4. **Basic LLM Chain (OpenAI)** - Summarizes or restructures the document content using OpenAI. Adjust the prompt inside to fit your analysis needs (summary, risk flags, clause extraction).
5. **Post-processing (Code Node)** - Further structures the AI output into a clean format (JSON, HTML, or plain text); a sample of the structured output follows at the end of this section.
6. **AI Agent (OpenAI)** - Runs deeper analysis, answers questions, and extracts insights.
7. **Gmail** - Sends the results to a recipient. Configure Gmail credentials and set your recipient address.
8. **Google Sheets** - Appends results to a Google Sheet for record-keeping or audits.
9. **Respond to Webhook** - Sends a quick acknowledgment back to confirm the document was received.

## 🔹 Credentials Needed
- OpenAI API key (for Chat Model + Agent)
- Gmail account (OAuth2)
- Google Sheets account (OAuth2)

## 🔹 Example Use Case
Upload a contract PDF → workflow extracts clauses → AI flags risky terms → Gmail sends a formatted summary → results stored in Google Sheets.
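The exact shape of the post-processed output depends on the prompt you configure, so treat this as a sketch of one reasonable JSON layout rather than the template's fixed schema; all field names and values are illustrative:

```json
{
  "document": "master-services-agreement.pdf",
  "summary": "Two-year MSA with auto-renewal and a 30-day termination notice.",
  "risk_flags": [
    {
      "clause": "11.2 Limitation of Liability",
      "risk": "high",
      "note": "Liability cap excludes data breach costs"
    }
  ],
  "key_clauses": ["Termination", "Indemnification", "Governing Law"]
}
```

Keeping the shape stable makes the downstream Gmail formatting and Google Sheets column mapping straightforward.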
by Juan Carlos Cavero Gracia
This workflow transforms any video you drop into a Google Drive folder into a ready-to-publish YouTube upload. It analyzes the video with AI to craft 3 high-CTR title ideas, 3 long SEO-friendly descriptions (with timestamps), and 10–15 optimized tags. It then generates 4 thumbnail options using your face and lets you pick your favorite before auto-publishing to YouTube via Upload-Post.

## Who Is This For?
- **YouTube Creators & Editors:** Ship videos with winning titles, thumbnails, and SEO in minutes.
- **Agencies & Media Teams:** Standardize output and speed across channels and clients.
- **Founders & Solo Makers:** Maintain consistent publishing with minimal manual work.

## What Problem Does It Solve?
Producing SEO metadata and high-performing thumbnails is slow and inconsistent. This flow:
- **Generates High-CTR Options:** 3 distinct angles for title/description/tags.
- **Creates Thumbnails with Your Face:** 4 options ready for review in one pass.
- **Auto-Publishes Safely:** Human selection gates reduce risk before going live.

## How It Works
1. **Google Drive Trigger**: Watches a folder for new video files.
2. **AI Video Analysis (Gemini)**: Produces an in-depth Spanish description and timestamps.
3. **Concept Generation**: Returns 3 JSON concepts (title, thumbnail prompt, description, tags); a sketch of one concept appears at the end of this section.
4. **User Review #1**: Pick your favorite concept in a simple form.
5. **Thumbnail Generation (fal.ai)**: Creates 4 thumbnails using your face (provided image URL).
6. **User Review #2**: Choose the best thumbnail.
7. **Upload to YouTube (Upload-Post)**: Publishes the video with your chosen title, description, tags, and thumbnail.

## Setup
1. **Credentials** (all offer free trials, no credit card required):
   - Google Gemini (chat/vision for analysis)
   - fal.ai API (thumbnail generation)
   - Upload-Post (connect your YouTube channel and generate API keys)
   - Google Drive OAuth (folder watch + file download)
2. **Provide Your Face Image URL(s)**: Used by fal.ai to integrate your face into thumbnails.
3. **Select the Google Drive Folder**: Where you'll drop videos to process.
4. **Pick & Publish**: Use the built-in forms to choose the concept and thumbnail.

## Requirements
- **Accounts:** Google (Drive + Gemini), fal.ai, Upload-Post, n8n.
- **API Keys:** Gemini, fal.ai; Upload-Post credentials; Google Drive OAuth.
- **Assets:** At least one clear face image for thumbnails.

## Features
- **Three SEO Angles:** Distinct title/description sets to test different intents.
- **Rich Descriptions with Timestamps:** Ready for YouTube SEO and viewer navigation.
- **Face-Integrated Thumbnails:** 4 options aligned with the selected title.
- **Human-in-the-Loop Controls:** Approve concepts and thumbnails before publishing.
- **Auto-Publish via Upload-Post:** One click to push live to YouTube.
- **Start Free:** All API calls can run on free trials, no credit card required.

## Video demo
https://www.youtube.com/watch?v=EOOgFveae-U
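The listing names the four fields each concept carries (title, thumbnail prompt, description, tags) without fixing their exact keys, so one generated concept might look like this; field names and content are illustrative assumptions:

```json
{
  "title": "I Automated My Entire YouTube Channel with n8n",
  "thumbnail_prompt": "Surprised face next to a glowing workflow diagram, bold yellow text: FULL AUTOPILOT",
  "description": "In this video I walk through the full pipeline...\n\n00:00 Intro\n01:24 Setting up the Drive trigger\n05:10 Generating thumbnails with fal.ai",
  "tags": ["n8n", "youtube automation", "ai thumbnails"]
}
```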
by Rahul Joshi
Automatically detect, classify, and document GitHub API errors using AI. This workflow connects GitHub, OpenAI (GPT-4o), Airtable, Notion, and Slack to build a real-time, searchable API error knowledge base — helping engineering and support teams respond faster, stay aligned, and maintain clean documentation. ⚙️📘💬

## 🚀 What This Template Does
1️⃣ Triggers on new or updated GitHub issues (API-related). 🪝
2️⃣ Extracts key fields (title, body, repo, and link). 📄
3️⃣ Classifies issues using OpenAI GPT-4o, identifying error type, category, root cause, and severity. 🤖
4️⃣ Validates & parses the AI output into structured JSON format (a sample appears at the end of this section). ✅
5️⃣ Creates or updates organized FAQ-style entries in Airtable for quick lookup. 🗂️
6️⃣ Logs detailed entries into Notion, maintaining an ongoing issue knowledge base. 📘
7️⃣ Notifies the right Slack team channel (DevOps, Backend, API, Support) with concise summaries. 💬
8️⃣ Tracks & prevents duplicates, keeping your error catalog clean and auditable. 🔄

## 💡 Key Benefits
✅ Converts unstructured GitHub issues into AI-analyzed documentation
✅ Centralizes API error intelligence across teams
✅ Reduces time-to-resolution for recurring issues
✅ Maintains synchronized records in Airtable & Notion
✅ Keeps DevOps and Support instantly informed through Slack alerts
✅ Fully automated, scalable, and low-cost using GPT-4o

## ⚙️ Features
- Real-time GitHub trigger for API or backend issues
- GPT-4o-based AI classification (error type, cause, severity, confidence)
- Smart duplicate-prevention logic
- Bi-directional sync to Airtable + Notion
- Slack alerts with contextual AI insights
- Modular design — easy to extend with Jira, Teams, or email integrations

## 🧰 Requirements
- GitHub OAuth2 credentials
- OpenAI API key (GPT-4o recommended)
- Airtable Base & Table IDs (with fields like Error Code, Category, Severity, Root Cause)
- Notion integration with database access
- Slack Bot token with chat:write scope

## 👥 Target Audience
- Engineering & DevOps teams managing APIs
- Customer support & SRE teams maintaining FAQs
- Product managers tracking recurring API issues
- SaaS orgs automating documentation & error visibility

## 🪜 Step-by-Step Setup Instructions
1️⃣ Connect your GitHub account and enable the "issues" webhook event.
2️⃣ Add OpenAI credentials (GPT-4o model for classification).
3️⃣ Create an Airtable base with fields: Error Code, Category, Root Cause, Severity, Confidence.
4️⃣ Configure your Notion database with a matching schema and access.
5️⃣ Set up Slack credentials and choose your alert channels.
6️⃣ Test with a sample GitHub issue to validate the AI classification.
7️⃣ Enable the workflow — enjoy continuous AI-powered issue documentation!
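Given the Airtable fields the setup asks for (Error Code, Category, Root Cause, Severity, Confidence), the validated classifier output for one issue would look roughly like this; the keys and routing field are illustrative assumptions about how the template maps them:

```json
{
  "error_code": "HTTP 429",
  "error_type": "Rate Limiting",
  "category": "API",
  "root_cause": "Unauthenticated requests exceed GitHub's 60 requests/hour limit",
  "severity": "medium",
  "confidence": 0.91,
  "slack_channel": "#api-support"
}
```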
by JJ Tham
Generate AI Voiceovers from Scripts and Upload to Google Drive This is the final piece of the AI content factory. This workflow takes your text-based video scripts and automatically generates high-quality audio voiceovers for each one, turning your text into ready-to-use audio assets for your video ads. Go from a spreadsheet of text to a folder of audio files, completely on autopilot. ⚠️ CRITICAL REQUIREMENTS (Read First!) This is an advanced, self-hosted workflow that requires specific local setup: Self-Hosted n8n Only:** This workflow uses the Execute Command and Read/Write Files nodes, which requires you to run your own instance of n8n. It will not work on n8n Cloud. FFmpeg Installation:** You must have FFmpeg installed on the same machine where your n8n instance is running. This is used to convert the audio files to a standard format. What it does This is Part 3 of the AI marketing series. It connects to the Google Sheet where you generated your video scripts (in Part 2). For each script that hasn't been processed, it: Uses the Google Gemini Text-to-Speech (TTS) API to generate a voiceover. Saves the audio file to your local computer. Uses FFmpeg to convert the raw audio into a standard .wav file. Uploads the final .wav file to your Google Drive. Updates the original Google Sheet with a link to the audio file in Drive and marks the script as complete. How to set up IMPORTANT: This workflow is Part 3 of a series and requires the output from Part 2 ("Generate AI Video Ad Scripts"). If you need Part 1 or Part 2 of this workflow series, you can find them for free on my n8n Creator Profile. Connect to Your Scripts Sheet: In the "Getting Video Scripts" node, connect your Google Sheets account and provide the URL to the sheet containing your generated video scripts from Part 2. Configure AI Voice Generation (HTTP Request): In the "HTTP Request To Generate Voice" node, go to the Query Parameters and replace INSERT YOUR API KEY HERE with your Google Gemini API key. In the JSON Body, you can customize the voice prompt (e.g., change <INSERT YOUR DESIRED ACCENT HERE>). Set Your Local File Path: In the first "Read/Write Files from Disk" node, update the File Name field to a valid directory on your local machine where n8n has permission to write files. Replace /Users/INSERT_YOUR_LOCAL_STORAGE_HERE/. Connect Google Drive: In the "Uploading Wav File" node, connect your Google Drive account and choose the folder where your audio files will be saved. Update Your Tracking Sheet: In the final "Uploading Google Drive Link..." node, ensure it's connected to the same Google Sheet from Step 1. This node will update your sheet with the results. Name and Description for Submission Form Here are the name and description, updated with the new information, ready for you to copy and paste. Name: Generate AI Voiceovers from Scripts and Upload to Google Drive Description: Welcome to the final piece of the AI content factory! 🔊 This advanced workflow takes the video ad scripts you've generated and automatically creates high-quality audio voiceovers for each one, completing your journey from strategy to ready-to-use media assets. ⚠️ This is an advanced workflow for self-hosted n8n instances only and requires FFmpeg to be installed locally. ⚙️ How it works This workflow is Part 3 of a series. It reads your video scripts from a Google Sheet, then for each script it: Generates a voiceover using the Google Gemini TTS API. Saves the audio file to your local machine. Converts the file to a standard .wav format using FFmpeg. 
Uploads the final audio file to Google Drive. Updates your Google Sheet with a link to the new audio file. 👥 Who’s it for? Video Creators & Marketers: Mass-produce voiceovers for video ads, tutorials, or social media content without hiring voice actors. Automation Power Users: A powerful example of how n8n can bridge cloud APIs with local machine commands. Agencies: Drastically speed up the production of audio assets for client campaigns. 🛠️ How to set up This workflow requires specific local setup due to its advanced nature. IMPORTANT: This is Part 3 of a series. To find Part 1 ("Generate a Strategic Plan") and Part 2 ("Generate Video Scripts"), please visit my n8n Creator Profile where they are available for free. Setup involves connecting to your scripts sheet, configuring the AI voice API, setting a local file path for n8n to write to, and connecting your Google Drive.
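The listing doesn't show the Execute Command node's exact settings, but assuming Gemini TTS returns raw 16-bit, 24 kHz mono PCM, the FFmpeg conversion step would look roughly like this, expressed as the node's parameter in JSON form; the file paths, sample rate, and channel flags are illustrative assumptions to verify against your actual TTS output:

```json
{
  "command": "ffmpeg -y -f s16le -ar 24000 -ac 1 -i /Users/you/voiceovers/script_01.pcm /Users/you/voiceovers/script_01.wav"
}
```

The `-f s16le -ar 24000 -ac 1` flags placed before `-i` tell FFmpeg how to interpret the headerless raw input; the .wav output format is inferred from the output file extension.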
by Budi SJ
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Multi Platform Content Generator from YouTube using AI & RSS

This workflow automates content generation by monitoring YouTube channels, extracting transcripts via AI, and creating platform-optimized content for LinkedIn, X/Twitter, Threads, and Instagram. Ideal for creators, marketers, and social media managers aiming to scale content production with minimal effort.

## ✨ Key Features
- 🔔 **Automated YouTube Monitoring** via RSS feed
- 🧠 **AI-Powered Transcript Extraction** using the Supadata API
- ✍️ **Multi-Platform Content Generation** with OpenRouter AI
- 🎯 **Platform Optimization** based on tone and character limits (a sample record follows below)
- 📬 **Telegram Notification** for easy preview
- 📊 **Centralized Data Management via Google Sheets**

> 🗂️ All video data, summaries, and generated content are tracked and stored in a single, centralized Google Sheets template. This ensures full visibility, easy access, and smooth collaboration across your team.

## ⚙️ Workflow Components

### 1. 🧭 Channel Monitoring
- **Schedule Trigger**: Initiates the workflow periodically
- **Google Sheets (Read)**: Pulls YouTube channel URLs
- **HTTP Request + HTML Parser**: Extracts channel IDs from the URLs
- **RSS Reader**: Fetches the latest video metadata

### 2. 🧾 Content Processing
- **Supadata API**: Extracts the transcript from the YouTube video
- **OpenRouter AI**: Summarizes the transcript + generates content per platform
- **Conditional Check**: Prevents duplicate content by checking existing records

### 3. 📤 Multi-Platform Output
- **LinkedIn**: Story-driven format (≤ 1300 characters)
- **X/Twitter**: Short, punchy copy (≤ 280 characters)
- **Threads**: Friendly, conversational
- **Instagram**: Short captions for visual posts

### 4. 🗃️ Data Management
- **Google Sheets (Write)**: Stores video metadata + generated posts
- **Telegram Bot**: Sends a content preview
- **ID Tracking**: Avoids reprocessing using the video ID

## 🔐 Required Credentials
- **Google Sheets OAuth2**
- **Supadata API**
- **OpenRouter API**
- **Telegram Bot Token & Chat ID**

## 🎁 Benefits
- ⌛ **Save Time**: Automates transcript + content generation
- 🔊 **Consistent Tone**: Adjust AI prompts for brand voice
- 📡 **Multi-Platform Ready**: One video → multiple formats
- 📂 **Centralized Logs via Google Sheets**: Easily track, audit, and collaborate
- 🚀 **Scalable**: Handle many channels with ease
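Each processed video ends up as one row in the tracking sheet; expressed as JSON, a row might look like this, with per-platform fields respecting the character limits above. The column names and values are illustrative assumptions, not the template's fixed schema:

```json
{
  "video_id": "dQw4w9WgXcQ",
  "title": "How to Scale Content with n8n",
  "linkedin": "Most creators burn out repurposing content by hand. Here's the system that changed that for us...",
  "twitter": "One video. Four platforms. Zero copy-paste. 🧵",
  "threads": "Okay, this automation genuinely changed how we publish...",
  "instagram": "New video is live! Link in bio 🎬",
  "status": "posted"
}
```

The duplicate check in the Content Processing stage would simply look up `video_id` before generating anything new.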